Releases: huggingface/huggingface_hub

[v1.6.0] New CLI commands, Bucket fsspec support, and more

06 Mar 13:52
7536357

This release brings significant new CLI commands for managing Spaces, Datasets, Discussions, and Webhooks, along with HfFileSystem support for Buckets and a CLI extension system.

🚀 New CLI commands

We've added several new CLI command groups to make interacting with the Hub even easier from your terminal.

New hf spaces dev-mode command

You can now enable or disable dev mode on Spaces directly from the CLI. When enabling dev mode, the command waits for the Space to be ready and prints connection instructions (web VSCode, SSH, local VSCode/Cursor). This makes iterating on Spaces much faster by allowing you to restart your application without stopping the Space container.

# Enable dev mode
hf spaces dev-mode username/my-space

# Disable dev mode
hf spaces dev-mode username/my-space --stop

New hf discussions command group

You can now manage discussions and pull requests on the Hub directly from the CLI. This includes listing, viewing, creating, commenting on, closing, reopening, renaming, and merging discussions and PRs.

# List open discussions and PRs on a repo
hf discussions list username/my-model

# Create a new discussion
hf discussions create username/my-model --title "Feature request" --body "Description"

# Create a pull request
hf discussions create username/my-model --title "Fix bug" --pull-request

# Merge a pull request
hf discussions merge username/my-model 5 --yes

New hf webhooks command group

Full CLI support for managing Hub webhooks is now available. You can list, inspect, create, update, enable/disable, and delete webhooks directly from the terminal.

# List all webhooks
hf webhooks ls

# Create a webhook
hf webhooks create --url https://example.com/hook --watch model:bert-base-uncased

# Enable / disable a webhook
hf webhooks enable webhook_id
hf webhooks disable webhook_id

# Delete a webhook
hf webhooks delete webhook_id
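The `--watch` value pairs an entity type with a name. A minimal sketch of parsing such a spec (illustrative only, not the CLI's actual internals):

```python
def parse_watch_spec(spec: str) -> dict:
    # "model:bert-base-uncased" -> {"type": "model", "name": "bert-base-uncased"}
    kind, _, name = spec.partition(":")
    if not name:
        raise ValueError(f"expected '<type>:<name>', got {spec!r}")
    return {"type": kind, "name": name}

print(parse_watch_spec("model:bert-base-uncased"))
```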

New hf datasets parquet and hf datasets sql commands

Two new commands make it easy to work with dataset parquet files. Use hf datasets parquet to discover parquet file URLs, then query them with hf datasets sql using DuckDB.

# List parquet URLs for a dataset
hf datasets parquet cfahlgren1/hub-stats
hf datasets parquet cfahlgren1/hub-stats --subset models --split train

# Run SQL queries on dataset parquet
hf datasets sql "SELECT COUNT(*) FROM read_parquet('https://huggingface.co/api/datasets/...')"

New hf repos duplicate command

You can now duplicate any repository (model, dataset, or Space) using a unified command. This replaces the previous duplicate_space method with a more general solution.

# Duplicate a Space
hf repos duplicate multimodalart/dreambooth-training --type space

# Duplicate a dataset
hf repos duplicate openai/gdpval --type dataset

  • Add duplicate_repo method and hf repos duplicate command by @Wauplin in #3880

🪣 Bucket support in HfFileSystem

The HfFileSystem now supports buckets, providing S3-like object storage on Hugging Face. You can list, glob, download, stream, and upload files in buckets using the familiar fsspec interface.

from huggingface_hub import hffs

# List files in a bucket
hffs.ls("buckets/my-username/my-bucket/data")

# Read a remote file
with hffs.open("buckets/my-username/my-bucket/data/file.txt", "r") as f:
    content = f.read()

# Read file content as string
hffs.read_text("buckets/my-username/my-bucket/data/file.txt")

📦 Extensions now support pip install

The hf extensions system now supports installing extensions as Python packages in addition to standalone executables. This makes it easier to distribute and install CLI extensions.

# Install an extension
> hf extensions install hanouticelina/hf-claude
> hf extensions install alvarobartt/hf-mem

# List them
> hf extensions list
COMMAND   SOURCE                  TYPE   INSTALLED  DESCRIPTION                        
--------- ----------------------- ------ ---------- -----------------------------------
hf claude hanouticelina/hf-claude binary 2026-03-06 Launch Claude Code with Hugging ...
hf mem    alvarobartt/hf-mem      python 2026-03-06 A CLI to estimate inference memo...

# Run extension
> hf claude --help
Usage: claude [options] [command] [prompt]

Claude Code - starts an interactive session by default, use -p/--print for non-interactive output

  • Add pip installable repos support to hf extensions by @Wauplin in #3892

Show installed extensions in hf --help

The CLI now shows installed extensions under an "Extension commands" section in the help output.

Other QoL improvements

  • Add NVIDIA provider support to InferenceClient by @manojkilaru97 in #3886
  • Bump hf_xet minimal package version to >=1.3.2 for better throughput by @Wauplin in #3873
  • Fix CLI errors formatting to include repo_id, repo_type, bucket_id by @Wauplin in #3889

📚 Documentation updates

🐛 Bug and typo fixes

💔 Breaking changes

  • Remove deprecated direction argument in list_models/datasets/spaces by @Wauplin in #3882

🏗️ Internal

[v1.5.0]: Buckets API, Agent-first CLI, Spaces Hot-Reload and more

26 Feb 15:02
2b20726

This release introduces major new features including Buckets (xet-based large scale object storage), CLI Extensions, Space Hot-Reload, and significant improvements for AI coding agents. The CLI has been completely overhauled with centralized error handling, better help output, and new commands for collections, papers, and more.

🪣 Buckets: S3-like Object Storage on the Hub

Buckets provide S3-like object storage on Hugging Face, powered by the Xet storage backend. Unlike repositories (which are git-based and track file history), buckets are remote object storage containers designed for large-scale files with content-addressable deduplication. Use them for training checkpoints, logs, intermediate artifacts, or any large collection of files that doesn't need version control.

# Create a bucket
hf buckets create my-bucket --private

# Upload a directory
hf buckets sync ./data hf://buckets/username/my-bucket

# Download from bucket
hf buckets sync hf://buckets/username/my-bucket ./data

# List files
hf buckets list username/my-bucket -R --tree

The Buckets API includes full CLI and Python support for creating, listing, moving, and deleting buckets; uploading, downloading, and syncing files; and managing bucket contents with include/exclude patterns.
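The include/exclude patterns behave like glob filters. A self-contained sketch of that semantic (an illustration of the filtering idea, not the library's actual implementation):

```python
from fnmatch import fnmatch

def keep(path, include=("*",), exclude=()):
    # A path is kept if it matches any include pattern and no exclude pattern.
    return any(fnmatch(path, p) for p in include) and not any(fnmatch(path, p) for p in exclude)

files = ["ckpt/step-100.pt", "ckpt/step-200.pt", "logs/run.txt"]
print([f for f in files if keep(f, include=("ckpt/*",), exclude=("*step-100*",))])
```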

📚 Documentation: Buckets guide

🤖 AI Agent Support

This release includes several features designed to improve the experience for AI coding agents (Claude Code, OpenCode, Cursor, etc.):

  • Centralized CLI error handling: Clean user-facing messages without tracebacks (set HF_DEBUG=1 for full traces) by @hanouticelina in #3754
  • Token-efficient skill: The hf skills add command now installs a compact skill (~1.2k tokens vs ~12k before) by @hanouticelina in #3802
  • Agent-friendly hf jobs logs: Prints available logs and exits by default; use -f to stream by @davanstrien in #3783
  • Add AGENTS.md: Dev setup and codebase guide for AI agents by @Wauplin in #3789

# Install the hf-cli skill for Claude
hf skills add --claude

# Install for project-level
hf skills add --project

🔥 Space Hot-Reload (Experimental)

Hot-reload Python files in a Space without a full rebuild and restart. This is useful for rapid iteration on Gradio apps.

# Open an interactive editor to modify a remote file
hf spaces hot-reload username/repo-name app.py

# Take local version and patch remote
hf spaces hot-reload username/repo-name -f app.py

🖥️ CLI Improvements

New Commands

  • Add hf papers ls to list daily papers on the Hub by @julien-c in #3723
  • Add hf collections commands (ls, info, create, update, delete, add-item, update-item, delete-item) by @Wauplin in #3767

CLI Extensions

This release introduces an extension mechanism for the hf CLI, inspired by gh extension. Extensions are standalone executables hosted in GitHub repositories that users can install, run, and remove with simple commands.

# Install an extension (defaults to huggingface org)
hf extensions install hf-claude

# Install from any GitHub owner
hf extensions install hanouticelina/hf-claude

# Run an extension
hf claude

# List installed extensions
hf extensions list

Output Format Options

  • Add --format {table,json} and -q/--quiet to hf models ls, hf datasets ls, hf spaces ls, hf endpoints ls by @hanouticelina in #3735
  • Align hf jobs ps output with standard CLI pattern by @davanstrien in #3799
  • Dynamic table columns based on --expand field by @hanouticelina in #3760

Usability

Jobs CLI

List available hardware:

> hf jobs hardware
NAME            PRETTY NAME            CPU      RAM     ACCELERATOR       COST/MIN COST/HOUR 
--------------- ---------------------- -------- ------- ----------------- -------- --------- 
cpu-basic       CPU Basic              2 vCPU   16 GB   N/A               $0.0002  $0.01     
cpu-upgrade     CPU Upgrade            8 vCPU   32 GB   N/A               $0.0005  $0.03     
cpu-performance CPU Performance        32 vCPU  256 GB  N/A               $0.3117  $18.70    
cpu-xl          CPU XL                 16 vCPU  124 GB  N/A               $0.0167  $1.00     
t4-small        Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)     $0.0067  $0.40     
t4-medium       Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)     $0.0100  $0.60     
a10g-small      Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)   $0.0167  $1.00  
...

This release also includes many smaller fixes and quality-of-life improvements.

🤖 Inference

  • Add dimensions & encoding_format parameter to InferenceClient for output embedding size by @mishig25 in #3671
  • feat: zai-org provider supports text to image by @tomsun28 in #3675
  • Fix fal image urls payload by @hanouticelina in #3746
  • Fix Replicate image-to-image compatibility with different model schemas by @hanouticelina in #3749
  • Accelerator parameter support for inference endpoints by @Wauplin in #3817

🔧 Other QoL Improvements

💔 Breaking Changes

  • hf jobs ps removes old Go-template --format '{{.id}}' syntax. Use -q for IDs or --format json | jq for custom extraction by @davanstrien in #3799
  • Migrate to hf repos instead of hf repo (old command still works but shows deprecation warning) by @Wauplin in #3848
  • Migrate hf repo-files delete to hf repo delete-files (old command hidden from help, shows deprecation warning) by @Wauplin in #3821

🐛 Bug and typo fixes

  • Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile by @leq6c in #3685
  • Fix endpoint not forwarded in CommitUrl by @Wauplin in #3679
  • Fix HfFileSystem.resolve_path() with special char @ by @lhoestq in #3704
  • Fix cache verify incorrectly reporting folders as missing files by @Mitix-EPI in #3707
  • Fix multi user cache lock permissions by @hanouticelina in #3714
  • Default _endpoint to None in CommitInfo, fixes tiny regression from v1.3.3 by @tomaarsen in #3737
  • Filter datasets by benchmark:official by @Wauplin in #3761
  • Fix file corruption when server ignores Range header on download retry by @XciD in #3778
  • Fix Xet token invalid on repo recreation...

[v1.4.1] Fix file corruption when server ignores Range header on download retry

06 Feb 09:28
5035d73

Fix file corruption when server ignores Range header on download retry.
Full details in #3778 by @XciD.

Full Changelog: v1.4.0...v1.4.1

[v0.36.2] Fix file corruption when server ignores Range header on download retry

06 Feb 09:29
664c484

Fix file corruption when server ignores Range header on download retry.
Full details in #3778 by @XciD.

Full Changelog: v0.36.1...v0.36.2

[v1.4.0] Building the HF CLI for You and your AI Agents

03 Feb 16:19

🧠 hf skills add CLI Command

A new hf skills add command installs the hf-cli skill for AI coding assistants (Claude Code, Codex, OpenCode). Your AI Agent now knows how to search the Hub, download models, run Jobs, manage repos, and more.

> hf skills add --help
Usage: hf skills add [OPTIONS]

  Download a skill and install it for an AI assistant.

Options:
  --claude      Install for Claude.
  --codex       Install for Codex.
  --opencode    Install for OpenCode.
  -g, --global  Install globally (user-level) instead of in the current
                project directory.
  --dest PATH   Install into a custom destination (path to skills directory).
  --force       Overwrite existing skills in the destination.
  --help        Show this message and exit.

Examples
  $ hf skills add --claude
  $ hf skills add --claude --global
  $ hf skills add --codex --opencode

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli

The skill is composed of two files fetched from the huggingface_hub docs: a CLI guide (SKILL.md) and the full CLI reference (references/cli.md). Files are installed to a central .agents/skills/hf-cli/ directory, and relative symlinks are created from agent-specific directories (e.g., .claude/skills/hf-cli → ../../.agents/skills/hf-cli/). This ensures a single source of truth when installing for multiple agents.
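The relative-symlink layout can be reproduced in a few lines of Python (a scratch-directory sketch of the structure described above; paths and file contents are illustrative):

```python
from pathlib import Path

# One canonical copy of the skill files...
root = Path("scratch")
skill = root / ".agents/skills/hf-cli"
skill.mkdir(parents=True, exist_ok=True)
(skill / "SKILL.md").write_text("demo skill")

# ...and a relative symlink from the agent-specific directory,
# so multiple agents can share the same source of truth.
agent_dir = root / ".claude/skills"
agent_dir.mkdir(parents=True, exist_ok=True)
link = agent_dir / "hf-cli"
if not link.is_symlink():
    link.symlink_to("../../.agents/skills/hf-cli")

print((link / "SKILL.md").read_text())  # resolves through the symlink
```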

🖥️ Improved CLI Help Output

The CLI help output has been reorganized to be more informative and agent-friendly:

  • Commands are now grouped into Main commands and Help commands
  • Examples section showing common usage patterns
  • Learn more section with links to documentation

> hf cache --help
Usage: hf cache [OPTIONS] COMMAND [ARGS]...

  Manage local cache directory.

Options:
  --help  Show this message and exit.

Main commands:
  ls      List cached repositories or revisions.
  prune   Remove detached revisions from the cache.
  rm      Remove cached repositories or revisions.
  verify  Verify checksums for a single repo revision from cache or a local
          directory.

Examples
  $ hf cache ls
  $ hf cache ls --revisions
  $ hf cache ls --filter "size>1GB" --limit 20
  $ hf cache ls --format json
  $ hf cache prune
  $ hf cache prune --dry-run
  $ hf cache rm model/gpt2
  $ hf cache rm <revision_hash>
  $ hf cache rm model/gpt2 --dry-run
  $ hf cache rm model/gpt2 --yes
  $ hf cache verify gpt2
  $ hf cache verify gpt2 --revision refs/pr/1
  $ hf cache verify my-dataset --repo-type dataset

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli

📊 Evaluation Results Module

The Hub now has a decentralized system for tracking model evaluation results. Benchmark datasets (like MMLU-Pro, HLE, GPQA) host leaderboards, and model repos store evaluation scores in .eval_results/*.yaml files. These results automatically appear on both the model page and the benchmark's leaderboard. See the Evaluation Results documentation for more details.

We added helpers in huggingface_hub to work with this format:

  • EvalResultEntry dataclass representing evaluation scores
  • eval_result_entries_to_yaml() to serialize entries to YAML format
  • parse_eval_result_entries() to parse YAML data back into EvalResultEntry objects

import yaml
from huggingface_hub import EvalResultEntry, eval_result_entries_to_yaml, upload_file

entries = [
    EvalResultEntry(dataset_id="cais/hle", task_id="default", value=20.90),
    EvalResultEntry(dataset_id="Idavidrein/gpqa", task_id="gpqa_diamond", value=0.412),
]
yaml_content = yaml.dump(eval_result_entries_to_yaml(entries))
upload_file(
    path_or_fileobj=yaml_content.encode(),
    path_in_repo=".eval_results/results.yaml",
    repo_id="your-username/your-model",
)

🖥️ Other CLI Improvements

New hf papers ls command to list daily papers on the Hub, with support for filtering by date and sorting by trending or publication date.

hf papers ls                       # List most recent daily papers
hf papers ls --sort=trending       # List trending papers
hf papers ls --date=2025-01-23     # List papers from a specific date
hf papers ls --date=today          # List today's papers

New hf collections commands for managing collections from the CLI:

# List collections
hf collections ls --owner nvidia --limit 5
hf collections ls --sort trending

# Create a collection
hf collections create "My Models" --description "Favorites" --private

# Add items
hf collections add-item user/my-coll models/gpt2 model
hf collections add-item user/my-coll datasets/squad dataset --note "QA dataset"

# Get info
hf collections info user/my-coll

# Delete
hf collections delete user/my-coll

Other CLI-related improvements:

📊 Jobs

Multi-GPU training commands are now supported with torchrun and accelerate launch:

> hf jobs uv run --with torch -- torchrun train.py
> hf jobs uv run --with accelerate -- accelerate launch train.py

You can also pass local config files alongside your scripts:

> hf jobs uv run script.py config.yml
> hf jobs uv run --with torch -- torchrun script.py config.yml

New hf jobs hardware command to list available hardware options:

> hf jobs hardware
NAME         PRETTY NAME            CPU      RAM     ACCELERATOR      COST/MIN COST/HOUR 
------------ ---------------------- -------- ------- ---------------- -------- --------- 
cpu-basic    CPU Basic              2 vCPU   16 GB   N/A              $0.0002  $0.01     
cpu-upgrade  CPU Upgrade            8 vCPU   32 GB   N/A              $0.0005  $0.03     
t4-small     Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)    $0.0067  $0.40     
t4-medium    Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)    $0.0100  $0.60     
a10g-small   Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)  $0.0167  $1.00     
a10g-large   Nvidia A10G - large    12 vCPU  46 GB   1x A10G (24 GB)  $0.0250  $1.50     
a10g-largex2 2x Nvidia A10G - large 24 vCPU  92 GB   2x A10G (48 GB)  $0.0500  $3.00     
a10g-largex4 4x Nvidia A10G - large 48 vCPU  184 GB  4x A10G (96 GB)  $0.0833  $5.00     
a100-large   Nvidia A100 - large    12 vCPU  142 GB  1x A100 (80 GB)  $0.0417  $2.50     
a100x4       4x Nvidia A100         48 vCPU  568 GB  4x A100 (320 GB) $0.1667  $10.00    
a100x8       8x Nvidia A100         96 vCPU  1136 GB 8x A100 (640 GB) $0.3333  $20.00    
l4x1         1x Nvidia L4           8 vCPU   30 GB   1x L4 (24 GB)    $0.0133  $0.80     
l4x4         4x Nvidia L4           48 vCPU  186 GB  4x L4 (96 GB)    $0.0633  $3.80     
l40sx1       1x Nvidia L40S         8 vCPU   62 GB   1x L40S (48 GB)  $0.0300  $1.80     
l40sx4       4x Nvidia L40S         48 vCPU  382 GB  4x L40S (192 GB) $0.1383  $8.30     
l40sx8       8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB) $0.3917  $23.50  

Better filtering with label support and negation:

> hf jobs ps -a --filter status!=error
> hf jobs ps -a --filter label=fine-tuning
> hf jobs ps -a --filter label=model=Qwen3-06B

⚡️ Inference

  • Add dimensions & encoding_format parameter to InferenceClient for output embedding size by @mishig25 in #3671
  • feat: zai-org provider supports text to image by @tomsun28 in #3675
  • [Inference Providers] fix fal image urls payload by @hanouticelina in #3746
  • Fix Replicate image-to-image compatibility with different model schemas by @hanouticelina in #3749

🔧 QoL Improvements


[v1.3.7] Log 'x-amz-cf-id' on http error if no request id

02 Feb 10:57
0d8d045

Log 'x-amz-cf-id' on http error (if no request id) (#3759)

Full Changelog: v1.3.5...v1.3.7

[v1.3.5] Configurable default timeout for HTTP calls

29 Jan 13:48
95a6f2f

  • Use HF_HUB_DOWNLOAD_TIMEOUT as default httpx timeout by @Wauplin in #3751

The default timeout is 10 seconds. This is fine for most use cases, but it can trigger errors in CI environments that issue many requests to the Hub. In those cases, set the environment variable HF_HUB_DOWNLOAD_TIMEOUT=60.
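For example, the override can be applied from Python before the library reads its configuration (HF_HUB_DOWNLOAD_TIMEOUT is picked up from the environment):

```python
import os

# Raise the default 10s HTTP timeout to 60s; must be set before
# huggingface_hub reads its configuration.
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "60"
```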

Full Changelog: v1.3.4...v1.3.5

[v1.3.4] Fix `CommitUrl._endpoint` default to None

26 Jan 14:06
875cfd4

  • Default _endpoint to None in CommitInfo, fixes tiny regression from v1.3.3 by @tomaarsen in #3737

Full Changelog: v1.3.3...v1.3.4

[v1.3.3] List Jobs Hardware & Bug Fixes

22 Jan 14:09

⚙️ List Jobs Hardware

You can now list all available hardware options for Hugging Face Jobs, both from the CLI and programmatically.

From the CLI:

hf jobs hardware                           
NAME            PRETTY NAME            CPU      RAM     ACCELERATOR      COST/MIN COST/HOUR 
--------------- ---------------------- -------- ------- ---------------- -------- --------- 
cpu-basic       CPU Basic              2 vCPU   16 GB   N/A              $0.0002  $0.01     
cpu-upgrade     CPU Upgrade            8 vCPU   32 GB   N/A              $0.0005  $0.03     
cpu-performance CPU Performance        8 vCPU   32 GB   N/A              $0.0000  $0.00     
cpu-xl          CPU XL                 16 vCPU  124 GB  N/A              $0.0000  $0.00     
t4-small        Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)    $0.0067  $0.40     
t4-medium       Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)    $0.0100  $0.60     
a10g-small      Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)  $0.0167  $1.00     
a10g-large      Nvidia A10G - large    12 vCPU  46 GB   1x A10G (24 GB)  $0.0250  $1.50     
a10g-largex2    2x Nvidia A10G - large 24 vCPU  92 GB   2x A10G (48 GB)  $0.0500  $3.00     
a10g-largex4    4x Nvidia A10G - large 48 vCPU  184 GB  4x A10G (96 GB)  $0.0833  $5.00     
a100-large      Nvidia A100 - large    12 vCPU  142 GB  1x A100 (80 GB)  $0.0417  $2.50     
a100x4          4x Nvidia A100         48 vCPU  568 GB  4x A100 (320 GB) $0.1667  $10.00    
a100x8          8x Nvidia A100         96 vCPU  1136 GB 8x A100 (640 GB) $0.3333  $20.00    
l4x1            1x Nvidia L4           8 vCPU   30 GB   1x L4 (24 GB)    $0.0133  $0.80     
l4x4            4x Nvidia L4           48 vCPU  186 GB  4x L4 (96 GB)    $0.0633  $3.80     
l40sx1          1x Nvidia L40S         8 vCPU   62 GB   1x L40S (48 GB)  $0.0300  $1.80     
l40sx4          4x Nvidia L40S         48 vCPU  382 GB  4x L40S (192 GB) $0.1383  $8.30     
l40sx8          8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB) $0.3917  $23.50 

Programmatically:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> hardware_list = api.list_jobs_hardware()
>>> hardware_list[0]
JobHardware(name='cpu-basic', pretty_name='CPU Basic', cpu='2 vCPU', ram='16 GB', accelerator=None, unit_cost_micro_usd=167, unit_cost_usd=0.000167, unit_label='minute')
>>> hardware_list[0].name
'cpu-basic'
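From the unit_cost_micro_usd field above, the per-hour cost shown in the CLI table can be recomputed (a quick sketch using the cpu-basic values):

```python
# Cost fields from the JobHardware example above (cpu-basic)
unit_cost_micro_usd = 167            # micro-USD per unit
unit_label = "minute"

cost_per_minute = unit_cost_micro_usd / 1_000_000   # 0.000167 USD/min
cost_per_hour = cost_per_minute * 60
print(f"${cost_per_hour:.2f}/hour")  # prints: $0.01/hour
```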

🐛 Bug Fixes

  • Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile in #3685 by @leq6c
  • Fix verify incorrectly reporting folders as missing files in #3707 by @Mitix-EPI
  • Fix resolve_path() with special char @ in #3704 by @lhoestq
  • Fix curlify with streaming request in #3692 by @hanouticelina

✨ Various Improvements

📚 Documentation

[v1.3.2] Zai provider support for `text-to-image` and fix custom endpoint not forwarded

14 Jan 14:09

Full Changelog: v1.3.1...v1.3.2