Releases: huggingface/huggingface_hub
[v1.6.0] New CLI commands, Bucket fsspec support, and more
This release brings significant new CLI commands for managing Spaces, Datasets, Discussions, and Webhooks, along with HfFileSystem support for Buckets and a CLI extension system.
🚀 New CLI commands
We've added several new CLI command groups to make interacting with the Hub even easier from your terminal.
New hf spaces dev-mode command
You can now enable or disable dev mode on Spaces directly from the CLI. When enabling dev mode, the command waits for the Space to be ready and prints connection instructions (web VSCode, SSH, local VSCode/Cursor). This makes iterating on Spaces much faster by allowing you to restart your application without stopping the Space container.
# Enable dev mode
hf spaces dev-mode username/my-space
# Disable dev mode
hf spaces dev-mode username/my-space --stop
New hf discussions command group
You can now manage discussions and pull requests on the Hub directly from the CLI. This includes listing, viewing, creating, commenting on, closing, reopening, renaming, and merging discussions and PRs.
# List open discussions and PRs on a repo
hf discussions list username/my-model
# Create a new discussion
hf discussions create username/my-model --title "Feature request" --body "Description"
# Create a pull request
hf discussions create username/my-model --title "Fix bug" --pull-request
# Merge a pull request
hf discussions merge username/my-model 5 --yes
- Add `hf discussions` command group by @Wauplin in #3855
- Rename `hf discussions view` to `hf discussions info` by @Wauplin in #3878
New hf webhooks command group
Full CLI support for managing Hub webhooks is now available. You can list, inspect, create, update, enable/disable, and delete webhooks directly from the terminal.
# List all webhooks
hf webhooks ls
# Create a webhook
hf webhooks create --url https://example.com/hook --watch model:bert-base-uncased
# Enable / disable a webhook
hf webhooks enable webhook_id
hf webhooks disable webhook_id
# Delete a webhook
hf webhooks delete webhook_id
- Add `hf webhooks` CLI commands by @omkar-334 in #3866
New hf datasets parquet and hf datasets sql commands
Two new commands make it easy to work with dataset parquet files. Use hf datasets parquet to discover parquet file URLs, then query them with hf datasets sql using DuckDB.
# List parquet URLs for a dataset
hf datasets parquet cfahlgren1/hub-stats
hf datasets parquet cfahlgren1/hub-stats --subset models --split train
# Run SQL queries on dataset parquet
hf datasets sql "SELECT COUNT(*) FROM read_parquet('https://huggingface.co/api/datasets/...')"
- Add `hf datasets parquet` and `hf datasets sql` commands by @cfahlgren1 in #3833
New hf repos duplicate command
You can now duplicate any repository (model, dataset, or Space) using a unified command. This replaces the previous duplicate_space method with a more general solution.
# Duplicate a Space
hf repos duplicate multimodalart/dreambooth-training --type space
# Duplicate a dataset
hf repos duplicate openai/gdpval --type dataset
🪣 Bucket support in HfFileSystem
The HfFileSystem now supports buckets, providing S3-like object storage on Hugging Face. You can list, glob, download, stream, and upload files in buckets using the familiar fsspec interface.
from huggingface_hub import hffs
# List files in a bucket
hffs.ls("buckets/my-username/my-bucket/data")
# Read a remote file
with hffs.open("buckets/my-username/my-bucket/data/file.txt", "r") as f:
    content = f.read()
# Read file content as string
hffs.read_text("buckets/my-username/my-bucket/data/file.txt")
- Add bucket API support in HfFileSystem by @lhoestq in #3807
- Add docs on `hf://buckets` by @lhoestq in #3875
- Remove bucket warning in docs by @Wauplin in #3854
📦 Extensions now support pip install
The hf extensions system now supports installing extensions as Python packages in addition to standalone executables. This makes it easier to distribute and install CLI extensions.
# Install an extension
> hf extensions install hanouticelina/hf-claude
> hf extensions install alvarobartt/hf-mem
# List them
> hf extensions list
COMMAND SOURCE TYPE INSTALLED DESCRIPTION
--------- ----------------------- ------ ---------- -----------------------------------
hf claude hanouticelina/hf-claude binary 2026-03-06 Launch Claude Code with Hugging ...
hf mem alvarobartt/hf-mem python 2026-03-06 A CLI to estimate inference memo...
# Run extension
> hf claude --help
Usage: claude [options] [command] [prompt]
Claude Code - starts an interactive session by default, use -p/--print for non-interactive output
Show installed extensions in hf --help
The CLI now shows installed extensions under an "Extension commands" section in the help output.
- Show installed extensions in `hf --help` by @hanouticelina in #3884
Other QoL improvements
- Add NVIDIA provider support to InferenceClient by @manojkilaru97 in #3886
- Bump `hf_xet` minimal package version to `>=1.3.2` for better throughput by @Wauplin in #3873
- Fix CLI errors formatting to include repo_id, repo_type, bucket_id by @Wauplin in #3889
📚 Documentation updates
- Fixed sub-headings for hf cache commands in the doc by @mostafatouny in #3877
🐛 Bug and typo fixes
- Fix: quote uv args in bash -c to prevent shell redirection by @XciD in #3857
- Fix typo in generated Skill by @hanouticelina in #3890
- Fix ty diagnostics in upload, filesystem, and repocard helpers by @hanouticelina in #3891
💔 Breaking changes
🏗️ Internal
- Release note skill attempt by @Wauplin in #3853
- Prepare for v1.6 by @Wauplin in #3860
- Skip git clone test by @Wauplin in #3881
- Add Sync `hf` CLI Skill workflow by @hanouticelina in #3885
- [Release notes] doc diffs, better skill, concurrent fetching by @Wauplin in #3887
- Propagate filtered headers to xet by @bpronan in #3858
[v1.5.0]: Buckets API, Agent-first CLI, Spaces Hot-Reload and more
This release introduces major new features including Buckets (xet-based large scale object storage), CLI Extensions, Space Hot-Reload, and significant improvements for AI coding agents. The CLI has been completely overhauled with centralized error handling, better help output, and new commands for collections, papers, and more.
🪣 Buckets: S3-like Object Storage on the Hub
Buckets provide S3-like object storage on Hugging Face, powered by the Xet storage backend. Unlike repositories (which are git-based and track file history), buckets are remote object storage containers designed for large-scale files with content-addressable deduplication. Use them for training checkpoints, logs, intermediate artifacts, or any large collection of files that doesn't need version control.
# Create a bucket
hf buckets create my-bucket --private
# Upload a directory
hf buckets sync ./data hf://buckets/username/my-bucket
# Download from bucket
hf buckets sync hf://buckets/username/my-bucket ./data
# List files
hf buckets list username/my-bucket -R --tree
The Buckets API includes full CLI and Python support for creating, listing, moving, and deleting buckets; uploading, downloading, and syncing files; and managing bucket contents with include/exclude patterns.
- Buckets API and CLI by @Wauplin in #3673
- Support bucket rename/move in API + CLI by @Wauplin in #3843
- Add 'sync_bucket' to HfApi by @Wauplin in #3845
- hf buckets file deletion by @Wauplin in #3849
- Update message when no buckets found by @Wauplin in #3850
- Buckets doc `hf install` by @julien-c in #3846
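The content-addressable deduplication that makes buckets efficient for checkpoints can be illustrated with a toy chunk store. This is a conceptual sketch only (fixed-size chunks and SHA-256 for brevity; Xet's real chunking is content-defined and far more sophisticated):

```python
import hashlib

def dedup_chunks(blobs: list[bytes], chunk_size: int = 4) -> dict[str, bytes]:
    """Toy content-addressable store: split each blob into fixed-size chunks
    keyed by SHA-256 digest, so identical chunks are stored only once."""
    store: dict[str, bytes] = {}
    for blob in blobs:
        for i in range(0, len(blob), chunk_size):
            chunk = blob[i : i + chunk_size]
            store[hashlib.sha256(chunk).hexdigest()] = chunk
    return store

# Two checkpoints that share most of their bytes deduplicate heavily:
ckpt_a = b"layer1layer2layer3"
ckpt_b = b"layer1layer2layer4"
store = dedup_chunks([ckpt_a, ckpt_b], chunk_size=6)
print(len(store))  # 4 unique chunks stored instead of 6
```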
📚 Documentation: Buckets guide
🤖 AI Agent Support
This release includes several features designed to improve the experience for AI coding agents (Claude Code, OpenCode, Cursor, etc.):
- Centralized CLI error handling: Clean user-facing messages without tracebacks (set `HF_DEBUG=1` for full traces) by @hanouticelina in #3754
- Token-efficient skill: The `hf skills add` command now installs a compact skill (~1.2k tokens vs ~12k before) by @hanouticelina in #3802
- Agent-friendly `hf jobs logs`: Prints available logs and exits by default; use `-f` to stream by @davanstrien in #3783
- Add AGENTS.md: Dev setup and codebase guide for AI agents by @Wauplin in #3789
# Install the hf-cli skill for Claude
hf skills add --claude
# Install for project-level
hf skills add --project
- Add `hf skills add` CLI command by @julien-c in #3741
- `hf skills add` installs to central location with symlinks by @hanouticelina in #3755
- Add Cursor skills support by @NielsRogge in #3810
🔥 Space Hot-Reload (Experimental)
Hot-reload Python files in a Space without a full rebuild and restart. This is useful for rapid iteration on Gradio apps.
# Open an interactive editor to modify a remote file
hf spaces hot-reload username/repo-name app.py
# Take local version and patch remote
hf spaces hot-reload username/repo-name -f app.py
- feat(spaces): hot-reload by @cbensimon in #3776
- fix hot reload reference part.2 by @cbensimon in #3820
🖥️ CLI Improvements
New Commands
- Add `hf papers ls` to list daily papers on the Hub by @julien-c in #3723
- Add `hf collections` commands (ls, info, create, update, delete, add-item, update-item, delete-item) by @Wauplin in #3767
CLI Extensions
Introduce an extension mechanism to the hf CLI. Extensions are standalone executables hosted in GitHub repositories that users can install, run, and remove with simple commands. Inspired by gh extension.
# Install an extension (defaults to huggingface org)
hf extensions install hf-claude
# Install from any GitHub owner
hf extensions install hanouticelina/hf-claude
# Run an extension
hf claude
# List installed extensions
hf extensions list
- Add `hf extension` by @hanouticelina in #3805
- Add `hf ext` alias by @hanouticelina in #3836
Output Format Options
- Add `--format {table,json}` and `-q`/`--quiet` to `hf models ls`, `hf datasets ls`, `hf spaces ls`, `hf endpoints ls` by @hanouticelina in #3735
- Align `hf jobs ps` output with standard CLI pattern by @davanstrien in #3799
- Dynamic table columns based on `--expand` field by @hanouticelina in #3760
Usability
- Improve `hf` CLI help output with examples and documentation links by @hanouticelina in #3743
- Add `-h` as short alias for `--help` by @assafvayner in #3800
- Add hidden `--version` flag by @Wauplin in #3784
- Add `--type` as alias for `--repo-type` by @Wauplin in #3835
- Better handling of aliases in documentation by @Wauplin in #3840
- Print first example only in group command --help by @Wauplin in #3841
- Subfolder download: `hf download repo_id subfolder/` now works as expected by @Wauplin in #3822
Jobs CLI
List available hardware:
✗ hf jobs hardware
NAME PRETTY NAME CPU RAM ACCELERATOR COST/MIN COST/HOUR
--------------- ---------------------- -------- ------- ----------------- -------- ---------
cpu-basic CPU Basic 2 vCPU 16 GB N/A $0.0002 $0.01
cpu-upgrade CPU Upgrade 8 vCPU 32 GB N/A $0.0005 $0.03
cpu-performance CPU Performance 32 vCPU 256 GB N/A $0.3117 $18.70
cpu-xl CPU XL 16 vCPU 124 GB N/A $0.0167 $1.00
t4-small Nvidia T4 - small 4 vCPU 15 GB 1x T4 (16 GB) $0.0067 $0.40
t4-medium Nvidia T4 - medium 8 vCPU 30 GB 1x T4 (16 GB) $0.0100 $0.60
a10g-small Nvidia A10G - small 4 vCPU 15 GB 1x A10G (24 GB) $0.0167 $1.00
...
Also added a ton of fixes and small QoL improvements.
- Support multi GPU training commands (`torchrun`, `accelerate launch`) by @lhoestq in #3674
- Pass local script and config files to job by @lhoestq in #3724
- List available hardware with `hf jobs hardware` by @Wauplin in #3693
- Better jobs filtering in CLI: labels and negation (`!=`) by @lhoestq in #3742
- Accept namespace/job_id format in jobs CLI commands by @davanstrien in #3811
- Pass namespace parameter to fetch job logs by @Praful932 in #3736
- Add more error handling output to hf jobs cli commands by @davanstrien in #3744
- Fix `hf jobs` commands crashing without a TTY by @davanstrien in #3782
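The `--filter key=value` / `key!=value` syntax can be sketched with a tiny parser. This is illustrative only, not the CLI's actual code; the key point is that only the first `=` (or `!=`) splits, so label values may themselves contain `=` (as in `label=model=Qwen3-06B`):

```python
def parse_filter(expr: str) -> tuple[str, str, str]:
    """Split a filter expression into (key, operator, value).
    Checks `!=` before `=` so negation is not misread as an equality filter."""
    if "!=" in expr:
        key, value = expr.split("!=", 1)
        return key, "!=", value
    key, value = expr.split("=", 1)
    return key, "=", value

def matches(fields: dict[str, str], expr: str) -> bool:
    """Apply one filter expression to a flat dict of job fields."""
    key, op, value = parse_filter(expr)
    actual = fields.get(key)
    return (actual == value) if op == "=" else (actual != value)

jobs = [{"status": "error"}, {"status": "running"}]
print([j for j in jobs if matches(j, "status!=error")])  # [{'status': 'running'}]
```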
🤖 Inference
- Add `dimensions` & `encoding_format` parameter to InferenceClient for output embedding size by @mishig25 in #3671
- feat: zai-org provider supports text to image by @tomsun28 in #3675
- Fix fal image urls payload by @hanouticelina in #3746
- Fix Replicate `image-to-image` compatibility with different model schemas by @hanouticelina in #3749
- Accelerator parameter support for inference endpoints by @Wauplin in #3817
🔧 Other QoL Improvements
- Support setting Label in Jobs API by @Wauplin in #3719
- Document built-in environment variables in Jobs docs (JOB_ID, ACCELERATOR, CPU_CORES, MEMORY) by @Wauplin in #3834
- Fix ReadTimeout crash in no-follow job logs by @davanstrien in #3793
- Add evaluation results module (`EvalResultEntry`, `parse_eval_result_entries`) by @hanouticelina in #3633
- Add source org field to `EvalResultEntry` by @hanouticelina in #3694
- Add limit param to list_papers API method by @Wauplin in #3697
- Add `num_papers` field to Organization class by @cfahlgren1 in #3695
- Update MAX_FILE_SIZE_GB from 50 to 200 by @davanstrien in #3696
- List datasets benchmark alias (`benchmark=True` → `benchmark="official"`) by @Wauplin in #3734
- Add notes field to `EvalResultEntry` by @Wauplin in #3738
- Make `task_id` required in `EvalResultEntry` by @Wauplin in #3718
- Repo commit count warning for `upload_large_folder` by @Wauplin in #3698
- Replace deprecated is_enterprise boolean by `plan` string in org info by @Wauplin in #3753
- Update hardware list in SpaceHardware enum by @lhoestq in #3756
- Use HF_HUB_DOWNLOAD_TIMEOUT as default httpx timeout by @Wauplin in #3751
- No timeout by default when using httpx by @Wauplin in #3790
- Log 'x-amz-cf-id' on http error (if no request id) by @Wauplin in #3759
- Parse xet hash from tree listing by @seanses in #3780
- Require filelock>=3.10.0 for `mode=` parameter support by @Wauplin in #3785
- Add overload decorators to `HfApi.snapshot_download` for dry_run typing by @Wauplin in #3788
- Dataclass doesn't call original `__init__` by @zucchini-nlp in #3818
- Strict dataclass sequence validation by @Wauplin in #3819
- Check if `dataclass.repr=True` before wrapping by @zucchini-nlp in #3823
💔 Breaking Changes
- `hf jobs ps` removes old Go-template `--format '{{.id}}'` syntax. Use `-q` for IDs or `--format json | jq` for custom extraction by @davanstrien in #3799
- Migrate to `hf repos` instead of `hf repo` (old command still works but shows deprecation warning) by @Wauplin in #3848
- Migrate `hf repo-files delete` to `hf repo delete-files` (old command hidden from help, shows deprecation warning) by @Wauplin in #3821
🐛 Bug and typo fixes
- Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile by @leq6c in #3685
- Fix endpoint not forwarded in CommitUrl by @Wauplin in #3679
- Fix `HfFileSystem.resolve_path()` with special char `@` by @lhoestq in #3704
- Fix cache verify incorrectly reporting folders as missing files by @Mitix-EPI in #3707
- Fix multi user cache lock permissions by @hanouticelina in #3714
- Default _endpoint to None in CommitInfo, fixes tiny regression from v1.3.3 by @tomaarsen in #3737
- Filter datasets by benchmark:official by @Wauplin in #3761
- Fix file corruption when server ignores Range header on download retry by @XciD in #3778
- Fix Xet token invalid on repo recreation...
[v1.4.1] Fix file corruption when server ignores Range header on download retry
Fix file corruption when server ignores Range header on download retry.
Full details in #3778 by @XciD.
Full Changelog: v1.4.0...v1.4.1
[v0.36.2] Fix file corruption when server ignores Range header on download retry
Fix file corruption when server ignores Range header on download retry.
Full details in #3778 by @XciD.
Full Changelog: v0.36.1...v0.36.2
[v1.4.0] Building the HF CLI for You and your AI Agents
🧠 hf skills add CLI Command
A new hf skills add command installs the hf-cli skill for AI coding assistants (Claude Code, Codex, OpenCode). Your AI Agent now knows how to search the Hub, download models, run Jobs, manage repos, and more.
> hf skills add --help
Usage: hf skills add [OPTIONS]
Download a skill and install it for an AI assistant.
Options:
--claude Install for Claude.
--codex Install for Codex.
--opencode Install for OpenCode.
-g, --global Install globally (user-level) instead of in the current
project directory.
--dest PATH Install into a custom destination (path to skills directory).
--force Overwrite existing skills in the destination.
--help Show this message and exit.
Examples
$ hf skills add --claude
$ hf skills add --claude --global
$ hf skills add --codex --opencode
Learn more
Use `hf <command> --help` for more information about a command.
Read the documentation at
https://huggingface.co/docs/huggingface_hub/en/guides/cli
The skill is composed of two files fetched from the huggingface_hub docs: a CLI guide (SKILL.md) and the full CLI reference (references/cli.md). Files are installed to a central .agents/skills/hf-cli/ directory, and relative symlinks are created from agent-specific directories (e.g., .claude/skills/hf-cli/ → ../../.agents/skills/hf-cli/). This ensures a single source of truth when installing for multiple agents.
- Add `hf skills add` CLI command by @julien-c in #3741
- [CLI] `hf skills add` installs hf-cli skill to central location with symlinks by @hanouticelina in #3755
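The central-copy-plus-relative-symlink layout described above can be sketched in a few lines. This is a simplified illustration with hypothetical paths on a POSIX filesystem, not the installer's actual code:

```python
import os
import tempfile
from pathlib import Path

def link_skill(central: str, agent_dir: str) -> str:
    """Symlink an agent-specific skills directory (e.g. .claude/skills/hf-cli)
    to the central .agents/skills/hf-cli copy, using a *relative* target so the
    tree stays valid if the project directory is moved."""
    os.makedirs(os.path.dirname(agent_dir), exist_ok=True)
    target = os.path.relpath(central, start=os.path.dirname(agent_dir))
    os.symlink(target, agent_dir)
    return os.readlink(agent_dir)

with tempfile.TemporaryDirectory() as root:
    central = os.path.join(root, ".agents", "skills", "hf-cli")
    os.makedirs(central)
    Path(central, "SKILL.md").write_text("# hf-cli skill\n")
    link = link_skill(central, os.path.join(root, ".claude", "skills", "hf-cli"))
    print(link)  # ../../.agents/skills/hf-cli (on POSIX)
    # Reading through the symlink resolves to the single central copy:
    print(Path(root, ".claude", "skills", "hf-cli", "SKILL.md").read_text())
```

Installing for a second agent (say Codex) would just add another symlink pointing at the same central directory.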
🖥️ Improved CLI Help Output
The CLI help output has been reorganized to be more informative and agent-friendly:
- Commands are now grouped into Main commands and Help commands
- Examples section showing common usage patterns
- Learn more section with links to documentation
> hf cache --help
Usage: hf cache [OPTIONS] COMMAND [ARGS]...
Manage local cache directory.
Options:
--help Show this message and exit.
Main commands:
ls List cached repositories or revisions.
prune Remove detached revisions from the cache.
rm Remove cached repositories or revisions.
verify Verify checksums for a single repo revision from cache or a local
directory.
Examples
$ hf cache ls
$ hf cache ls --revisions
$ hf cache ls --filter "size>1GB" --limit 20
$ hf cache ls --format json
$ hf cache prune
$ hf cache prune --dry-run
$ hf cache rm model/gpt2
$ hf cache rm <revision_hash>
$ hf cache rm model/gpt2 --dry-run
$ hf cache rm model/gpt2 --yes
$ hf cache verify gpt2
$ hf cache verify gpt2 --revision refs/pr/1
$ hf cache verify my-dataset --repo-type dataset
Learn more
Use `hf <command> --help` for more information about a command.
Read the documentation at
https://huggingface.co/docs/huggingface_hub/en/guides/cli
- [CLI] improve `hf` CLI help output by @hanouticelina in #3743
📊 Evaluation Results Module
The Hub now has a decentralized system for tracking model evaluation results. Benchmark datasets (like MMLU-Pro, HLE, GPQA) host leaderboards, and model repos store evaluation scores in .eval_results/*.yaml files. These results automatically appear on both the model page and the benchmark's leaderboard. See the Evaluation Results documentation for more details.
We added helpers in huggingface_hub to work with this format:
- `EvalResultEntry` dataclass representing evaluation scores
- `eval_result_entries_to_yaml()` to serialize entries to YAML format
- `parse_eval_result_entries()` to parse YAML data back into `EvalResultEntry` objects
import yaml
from huggingface_hub import EvalResultEntry, eval_result_entries_to_yaml, upload_file
entries = [
    EvalResultEntry(dataset_id="cais/hle", task_id="default", value=20.90),
    EvalResultEntry(dataset_id="Idavidrein/gpqa", task_id="gpqa_diamond", value=0.412),
]
yaml_content = yaml.dump(eval_result_entries_to_yaml(entries))
upload_file(
    path_or_fileobj=yaml_content.encode(),
    path_in_repo=".eval_results/results.yaml",
    repo_id="your-username/your-model",
)
- Add evaluation results module by @hanouticelina in #3633
- Eval results synchronization by @Wauplin in #3718
- Eval results notes by @Wauplin in #3738
🖥️ Other CLI Improvements
New hf papers ls command to list daily papers on the Hub, with support for filtering by date and sorting by trending or publication date.
hf papers ls # List most recent daily papers
hf papers ls --sort=trending # List trending papers
hf papers ls --date=2025-01-23 # List papers from a specific date
hf papers ls --date=today # List today's papers
New hf collections commands for managing collections from the CLI:
# List collections
hf collections ls --owner nvidia --limit 5
hf collections ls --sort trending
# Create a collection
hf collections create "My Models" --description "Favorites" --private
# Add items
hf collections add-item user/my-coll models/gpt2 model
hf collections add-item user/my-coll datasets/squad dataset --note "QA dataset"
# Get info
hf collections info user/my-coll
# Delete
hf collections delete user/my-coll
Other CLI-related improvements:
- [CLI] output format option for ls CLIs by @hanouticelina in #3735
- [CLI] Dynamic table columns based on `--expand` field by @hanouticelina in #3760
- [CLI] Adds centralized error handling by @hanouticelina in #3754
- [CLI] exception handling scope by @hanouticelina in #3748
- Update CLI help output in docs to include new commands by @julien-c in #3722
📊 Jobs
Multi-GPU training commands are now supported with torchrun and accelerate launch:
> hf jobs uv run --with torch -- torchrun train.py
> hf jobs uv run --with accelerate -- accelerate launch train.py
You can also pass local config files alongside your scripts:
> hf jobs uv run script.py config.yml
> hf jobs uv run --with torch torchrun script.py config.yml
New hf jobs hardware command to list available hardware options:
> hf jobs hardware
NAME PRETTY NAME CPU RAM ACCELERATOR COST/MIN COST/HOUR
------------ ---------------------- -------- ------- ---------------- -------- ---------
cpu-basic CPU Basic 2 vCPU 16 GB N/A $0.0002 $0.01
cpu-upgrade CPU Upgrade 8 vCPU 32 GB N/A $0.0005 $0.03
t4-small Nvidia T4 - small 4 vCPU 15 GB 1x T4 (16 GB) $0.0067 $0.40
t4-medium Nvidia T4 - medium 8 vCPU 30 GB 1x T4 (16 GB) $0.0100 $0.60
a10g-small Nvidia A10G - small 4 vCPU 15 GB 1x A10G (24 GB) $0.0167 $1.00
a10g-large Nvidia A10G - large 12 vCPU 46 GB 1x A10G (24 GB) $0.0250 $1.50
a10g-largex2 2x Nvidia A10G - large 24 vCPU 92 GB 2x A10G (48 GB) $0.0500 $3.00
a10g-largex4 4x Nvidia A10G - large 48 vCPU 184 GB 4x A10G (96 GB) $0.0833 $5.00
a100-large Nvidia A100 - large 12 vCPU 142 GB 1x A100 (80 GB) $0.0417 $2.50
a100x4 4x Nvidia A100 48 vCPU 568 GB 4x A100 (320 GB) $0.1667 $10.00
a100x8 8x Nvidia A100 96 vCPU 1136 GB 8x A100 (640 GB) $0.3333 $20.00
l4x1 1x Nvidia L4 8 vCPU 30 GB 1x L4 (24 GB) $0.0133 $0.80
l4x4 4x Nvidia L4 48 vCPU 186 GB 4x L4 (96 GB) $0.0633 $3.80
l40sx1 1x Nvidia L40S 8 vCPU 62 GB 1x L40S (48 GB) $0.0300 $1.80
l40sx4 4x Nvidia L40S 48 vCPU 382 GB 4x L40S (192 GB) $0.1383 $8.30
l40sx8          8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB)  $0.3917  $23.50
Better filtering with label support and negation:
> hf jobs ps -a --filter status!=error
> hf jobs ps -a --filter label=fine-tuning
> hf jobs ps -a --filter label=model=Qwen3-06B
- [Jobs] Support multi gpu training commands by @lhoestq in #3674
- [Jobs] List available hardware by @Wauplin in #3693
- [Jobs] Better jobs filtering in CLI: labels and negation by @lhoestq in #3742
- Pass local script and config files to job by @lhoestq in #3724
- Support setting Label in Jobs API by @Wauplin in #3719
- Pass namespace parameter to fetch job logs in jobs CLI by @Praful932 in #3736
- Add more error handling output to hf jobs cli commands by @davanstrien in #3744
⚡️ Inference
- Add dimensions & encoding_format parameter to InferenceClient for output embedding size by @mishig25 in #3671
- feat: zai-org provider supports text to image by @tomsun28 in #3675
- [Inference Providers] fix fal image urls payload by @hanouticelina in #3746
- Fix Replicate image-to-image compatibility with different model schemas by @hanouticelina in #3749
🔧 QoL Improvements
- add source org field by @hanouticelina in #3694
- add num_papers field to Organization class by @cfahlgren1 in #3695
- Add limit param to list_papers API method by @Wauplin in #3697
- Repo commit count warning by @Wauplin in #3698
- List datasets benchmark alias by @Wauplin in #3734
- List repo files repoType by @Wauplin in #3753
- Update hardware list in SpaceHardware enum by @lhoestq in #3756
- Use HF_HUB_DOWNLOAD_TIMEOUT as default httpx timeout by @Wauplin in #3751
- Default _endpoint to None in CommitInfo by @tomaarsen in #37...
[v1.3.7] Log 'x-amz-cf-id' on http error if no request id
Log 'x-amz-cf-id' on http error (if no request id) (#3759)
Full Changelog: v1.3.5...v1.3.7
[v1.3.5] Configurable default timeout for HTTP calls
Default timeout is 10s. This is ok in most use cases but can trigger errors in CIs making a lot of requests to the Hub. Solution is to set HF_HUB_DOWNLOAD_TIMEOUT=60 as environment variable in these cases.
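For example, in a CI environment:

```shell
# Raise the default Hub HTTP timeout from 10s to 60s for request-heavy CI runs
export HF_HUB_DOWNLOAD_TIMEOUT=60
```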
Full Changelog: v1.3.4...v1.3.5
[v1.3.4] Fix `CommitUrl._endpoint` default to None
- Default _endpoint to None in CommitInfo, fixes tiny regression from v1.3.3 by @tomaarsen in #3737
Full Changelog: v1.3.3...v1.3.4
[v1.3.3] List Jobs Hardware & Bug Fixes
⚙️ List Jobs Hardware
You can now list all available hardware options for Hugging Face Jobs, both from the CLI and programmatically.
From the CLI:
➜ hf jobs hardware
NAME PRETTY NAME CPU RAM ACCELERATOR COST/MIN COST/HOUR
--------------- ---------------------- -------- ------- ---------------- -------- ---------
cpu-basic CPU Basic 2 vCPU 16 GB N/A $0.0002 $0.01
cpu-upgrade CPU Upgrade 8 vCPU 32 GB N/A $0.0005 $0.03
cpu-performance CPU Performance 8 vCPU 32 GB N/A $0.0000 $0.00
cpu-xl CPU XL 16 vCPU 124 GB N/A $0.0000 $0.00
t4-small Nvidia T4 - small 4 vCPU 15 GB 1x T4 (16 GB) $0.0067 $0.40
t4-medium Nvidia T4 - medium 8 vCPU 30 GB 1x T4 (16 GB) $0.0100 $0.60
a10g-small Nvidia A10G - small 4 vCPU 15 GB 1x A10G (24 GB) $0.0167 $1.00
a10g-large Nvidia A10G - large 12 vCPU 46 GB 1x A10G (24 GB) $0.0250 $1.50
a10g-largex2 2x Nvidia A10G - large 24 vCPU 92 GB 2x A10G (48 GB) $0.0500 $3.00
a10g-largex4 4x Nvidia A10G - large 48 vCPU 184 GB 4x A10G (96 GB) $0.0833 $5.00
a100-large Nvidia A100 - large 12 vCPU 142 GB 1x A100 (80 GB) $0.0417 $2.50
a100x4 4x Nvidia A100 48 vCPU 568 GB 4x A100 (320 GB) $0.1667 $10.00
a100x8 8x Nvidia A100 96 vCPU 1136 GB 8x A100 (640 GB) $0.3333 $20.00
l4x1 1x Nvidia L4 8 vCPU 30 GB 1x L4 (24 GB) $0.0133 $0.80
l4x4 4x Nvidia L4 48 vCPU 186 GB 4x L4 (96 GB) $0.0633 $3.80
l40sx1 1x Nvidia L40S 8 vCPU 62 GB 1x L40S (48 GB) $0.0300 $1.80
l40sx4 4x Nvidia L40S 48 vCPU 382 GB 4x L40S (192 GB) $0.1383 $8.30
l40sx8          8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB)  $0.3917  $23.50
Programmatically:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> hardware_list = api.list_jobs_hardware()
>>> hardware_list[0]
JobHardware(name='cpu-basic', pretty_name='CPU Basic', cpu='2 vCPU', ram='16 GB', accelerator=None, unit_cost_micro_usd=167, unit_cost_usd=0.000167, unit_label='minute')
>>> hardware_list[0].name
'cpu-basic'
🐛 Bug Fixes
- Fix severe performance regression in streaming by keeping a byte iterator in `HfFileSystemStreamFile` in #3685 by @leq6c
- Fix verify incorrectly reporting folders as missing files in #3707 by @Mitix-EPI
- Fix `resolve_path()` with special char `@` in #3704 by @lhoestq
- Fix curlify with streaming request in #3692 by @hanouticelina
✨ Various Improvements
- Add `num_papers` field to Organization class in #3695 by @cfahlgren1
- Add `limit` param to `list_papers` API method in #3697 by @Wauplin
- Add repo commit count warning when exceeding recommended limits in #3698 by @Wauplin
- Update `MAX_FILE_SIZE_GB` from 50 to 200 GB in #3696 by @davanstrien
📚 Documentation
- Wildcard pattern documentation in #3710 by @hanouticelina
[v1.3.2] Zai provider support for `text-to-image` and fix custom endpoint not forwarded
- Fix endpoint not forwarded in CommitUrl #3679 by @Wauplin
- feat: zai-org provider supports text to image #3675 by @tomsun28
Full Changelog: v1.3.1...v1.3.2