Releases: huggingface/huggingface_hub

[v0.33.1]: Inference Providers Bug Fixes, Tiny-Agents Message Handling Improvement, and Inference Endpoints Health Check Update

25 Jun 12:18

Full Changelog: v0.33.0...v0.33.1

This release introduces bug fixes for chat completion type compatibility and feature extraction parameters, improved message handling in tiny-agents, and an updated Inference Endpoints health check:

  • [Tiny agents] Add tool call to messages #3159 by @NielsRogge
  • fix: update payload preparation to merge parameters into the output dictionary #3160 by @mishig25
  • fix(inference_endpoints): use GET healthRoute instead of GET / to check status #3165 by @mfuntowicz
  • Recursive filter_none in Inference Providers #3178 by @Wauplin

[v0.33.0]: Welcoming Featherless.AI and Groq as Inference Providers!

11 Jun 14:14
d5dff4e

⚡ New provider: Featherless.AI

Featherless AI is a serverless AI inference provider with unique model loading and GPU orchestration abilities that make an exceptionally large catalog of models available to users. Providers typically offer either low-cost access to a limited set of models, or an unlimited range of models where users manage servers and the associated operating costs. Featherless provides the best of both worlds: unmatched model range and variety, with serverless pricing. Find the full list of supported models on the models page.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="featherless-ai")

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528", 
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ], 
)

print(completion.choices[0].message)

  • ✨ Support for Featherless.ai as inference provider by @pohnean in #3081

⚡ New provider: Groq

At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.

Groq offers fast AI inference for openly available models. They provide an API that allows developers to easily integrate these models into their applications, with an on-demand, pay-as-you-go model for accessing a wide range of open LLMs.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="groq")

completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://vagabundler.com/wp-content/uploads/2019/06/P3160166-Copy.jpg"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)

🤖 MCP and Tiny-agents

It is now possible to run tiny-agents with a local inference server such as llama.cpp. 100% local agents are right around the corner!
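
Here is a rough, hedged sketch of what a fully local setup could look like. This is not the exact API: the base_url parameter follows the local/remote endpoint support added in #3121, and the model name, port, and MCP server entry are placeholders.

import asyncio

from huggingface_hub import Agent

# Assumes a llama.cpp server is already running locally, e.g.:
#   llama-server -m ./qwen2.5-7b-instruct-q4_k_m.gguf --port 8080
async def main():
    agent = Agent(
        model="qwen2.5-7b-instruct",          # whatever model the local server exposes
        base_url="http://localhost:8080/v1",  # local OpenAI-compatible endpoint
        servers=[{"type": "stdio", "command": "npx", "args": ["@playwright/mcp@latest"]}],
    )
    await agent.load_tools()
    async for chunk in agent.run("Open huggingface.co and list the trending models"):
        print(chunk)

if __name__ == "__main__":
    asyncio.run(main())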

We also fixed some developer-experience issues in the tiny-agents CLI.

📚 Documentation

New translation from the Hindi-speaking community, for the community!

  • Added Hindi translation for git_vs_http.md in concepts section by @february-king in #3156

🛠️ Small fixes and maintenance

😌 QoL improvements

🐛 Bug and typo fixes

🏗️ internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @pohnean
    • ✨ Support for Featherless.ai as inference provider (#3081)
  • @february-king
    • Added Hindi translation for git_vs_http.md in concepts section (#3156)

[v0.32.6] [Upload large folder] fix for wrongly saved upload_mode/remote_oid

11 Jun 08:18
f498b42

[v0.32.5] [Tiny-Agents] inject environment variables in headers

10 Jun 16:04
8dfb199

  • Inject env var in headers + better type annotations #3142

Full Changelog: v0.32.4...v0.32.5

[v0.32.4]: Bug fixes in `tiny-agents`, and a fix for input handling in the question-answering task

03 Jun 10:04

Full Changelog: v0.32.3...v0.32.4

This release introduces bug fixes to tiny-agents and InferenceClient.question_answering.

[v0.32.3]: Handle env variables in `tiny-agents`, better CLI exit, and improved handling of MCP tool call arguments

30 May 08:29

Full Changelog: v0.32.2...v0.32.3

This release introduces some improvements and bug fixes to tiny-agents:

  • [tiny-agents] Handle env variables in tiny-agents (Python client) #3129
  • [Fix] tiny-agents cli exit issues #3125
  • Improve Handling of MCP Tool Call Arguments #3127

[v0.32.2]: Add endpoint support in Tiny-Agent + fix `snapshot_download` on large repos

27 May 09:24
6dd0164

Full Changelog: v0.32.1...v0.32.2

  • [MCP] Add local/remote endpoint inference support #3121
  • Fix snapshot_download on very large repo (>50k files) #3122

[v0.32.1]: hot-fix: Fix tiny agents on Windows

26 May 09:53

[v0.32.0]: MCP Client, Tiny Agents CLI and more!

22 May 21:38

🤖 Powering LLMs with Tools: MCP Client & Tiny Agents CLI

✨ The huggingface_hub library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the Model Context Protocol (MCP). This client extends the InferenceClient and provides a seamless way to connect LLMs to both local and remote tool servers!

pip install -U huggingface_hub[mcp]

In the following example, we use the Qwen/Qwen2.5-72B-Instruct model via the Nebius inference provider. We then add a remote MCP server, in this case, an SSE server which makes the Flux image generation tool available to the LLM:

import os

from huggingface_hub import ChatCompletionInputMessage, ChatCompletionStreamOutput, MCPClient

async def main():
    async with MCPClient(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
    ) as client:
        await client.add_mcp_server(type="sse", url="https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse")
        messages = [
            {
                "role": "user",
                "content": "Generate a picture of a cat on the moon",
            }
        ]
        async for chunk in client.process_single_turn_with_tools(messages):
            # Log messages
            if isinstance(chunk, ChatCompletionStreamOutput):
                delta = chunk.choices[0].delta
                if delta.content:
                    print(delta.content, end="")

            # Or tool calls
            elif isinstance(chunk, ChatCompletionInputMessage):
                print(
                    f"\nCalled tool '{chunk.name}'. Result: '{chunk.content if len(chunk.content) < 1000 else chunk.content[:1000] + '...'}'"
                )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

For even simpler development, we now also offer a higher-level Agent class. These 'Tiny Agents' simplify creating conversational agents by managing the chat loop and state: the Agent is essentially a user-friendly wrapper around MCPClient, designed as a simple while loop built right on top of it.
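
As an illustration, here is a minimal, hedged sketch of the Agent flow, reusing the provider, model, and MCP server from the example above (check the MCP documentation for the exact signature):

import asyncio
import os

from huggingface_hub import Agent

async def main():
    agent = Agent(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
        servers=[{"type": "sse", "url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse"}],
    )
    await agent.load_tools()
    # The Agent manages the chat loop: one call handles tool calls and replies.
    async for chunk in agent.run("Generate a picture of a cat on the moon"):
        print(chunk)

if __name__ == "__main__":
    asyncio.run(main())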

You can run these Agents directly from the command line:

> tiny-agents run --help

 Usage: tiny-agents run [OPTIONS] [PATH] COMMAND [ARGS]...

 Run the Agent in the CLI

╭─ Arguments ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│   path      [PATH]  Path to a local folder containing an agent.json file or a built-in agent stored in the 'tiny-agents/tiny-agents' Hugging Face dataset                         │
│                     (https://huggingface.co/datasets/tiny-agents/tiny-agents)                                                                                                     │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --help          Show this message and exit.                                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

You can run these Agents using your own local configs or load them directly from the Hugging Face dataset tiny-agents.
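For reference, a local config is simply a folder containing an agent.json. The sketch below follows the format used in the tiny-agents dataset; the exact keys may vary, so treat it as illustrative:

{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "servers": [
    {
      "type": "stdio",
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  ]
}

You would then run it with: tiny-agents run ./my-agent/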

This is an early version of the MCPClient, and community contributions are welcome 🤗

⚡ Inference Providers

Thanks to @diadorer, feature extraction (embeddings) inference is now supported with the Nebius provider (see the sketch below)!

  • [Inference Providers] Add feature extraction task for Nebius by @diadorer in #3057
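
A quick, hedged sketch of the new task (the model ID is illustrative; pick any embedding model served by Nebius):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="nebius")

# Returns a numpy array with the embedding(s) for the input text.
embedding = client.feature_extraction(
    "Today is a sunny day and I will get some ice cream.",
    model="BAAI/bge-multilingual-gemma2",  # illustrative model ID
)
print(embedding.shape)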

We’re thrilled to introduce Nscale as an official inference provider! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models 🔥

  • 🗿 adding support for Nscale inference provider by @nbarr07 in #3068
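
Usage mirrors the other providers; here is a minimal, hedged example (the model ID is illustrative, check the provider's model list):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="nscale")

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model ID
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message)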

We also fixed compatibility issues with structured outputs across providers by ensuring the InferenceClient follows the OpenAI API specification for structured outputs (see the sketch below).

  • [Inference Providers] Fix structured output schema in chat completion by @hanouticelina in #3082
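
Concretely, requests accept the OpenAI-style response_format payload. A hedged sketch, where the provider, model, and schema are placeholders:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="nebius")  # any provider with structured output support

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Name the capital of France as JSON."}],
    # OpenAI-style structured output spec; the schema below is a toy example.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "capital",
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
            "strict": True,
        },
    },
)
print(response.choices[0].message.content)  # e.g. {"city": "Paris"}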

💾 Serialization

We've introduced a new @strict decorator for dataclasses, providing robust validation capabilities to ensure data integrity both at initialization and during assignment. Here is a basic example:

from dataclasses import dataclass
from huggingface_hub.dataclasses import strict, as_validated_field

# Custom validator to ensure a value is positive
@as_validated_field
def positive_int(value: int):
    if not value > 0:
        raise ValueError(f"Value must be positive, got {value}")


@strict
@dataclass
class Config:
    model_type: str
    hidden_size: int = positive_int(default=16)
    vocab_size: int = 32  # Default value

    # Methods named `validate_xxx` are treated as class-wise validators
    def validate_big_enough_vocab(self):
        if self.vocab_size < self.hidden_size:
            raise ValueError(f"vocab_size ({self.vocab_size}) must be greater than hidden_size ({self.hidden_size})")

config = Config(model_type="bert", hidden_size=24)   # Valid
config = Config(model_type="bert", hidden_size=-1)   # Raises StrictDataclassFieldValidationError

# `vocab_size` too small compared to `hidden_size`
config = Config(model_type="bert", hidden_size=32, vocab_size=16)   # Raises StrictDataclassClassValidationError

This feature also includes support for custom validators, class-wise validation logic, handling of additional keyword arguments, and automatic validation based on type hints. Documentation can be found here.

  • New @strict decorator for dataclass validation by @Wauplin in #2895

This release also brings support for DTensor in the _get_unique_id / get_torch_storage_size helpers, allowing transformers to seamlessly use save_pretrained with DTensor.

✨ HF API

When creating an Endpoint, the default for scale_to_zero_timeout is now None, meaning endpoints will no longer scale to zero by default unless explicitly configured.

  • Dont set scale to zero as default when creating an Endpoint by @tomaarsen in #3062
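
If you still want scale-to-zero behavior, pass the timeout explicitly. A hedged sketch (all endpoint parameters below are illustrative):

from huggingface_hub import create_inference_endpoint

# scale_to_zero_timeout must now be set explicitly (in minutes) if you want
# the endpoint to scale down when idle; leaving it as None keeps it running.
endpoint = create_inference_endpoint(
    "my-endpoint-name",
    repository="gpt2",
    framework="pytorch",
    accelerator="cpu",
    instance_size="x2",
    instance_type="intel-icl",
    region="us-east-1",
    vendor="aws",
    scale_to_zero_timeout=15,
)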

We've also introduced experimental helpers to manage OAuth within FastAPI applications, bringing functionality previously used in Gradio to a wider range of frameworks for easier integration.

  • Add helpers to handle OAuth in a FastAPI app by @Wauplin in #2684
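
A minimal sketch, assuming the helper names from the PR (attach_huggingface_oauth / parse_huggingface_oauth); the feature is experimental, so the API may change:

from fastapi import FastAPI, Request

from huggingface_hub import attach_huggingface_oauth, parse_huggingface_oauth

app = FastAPI()
attach_huggingface_oauth(app)  # registers the OAuth login/logout/callback routes

@app.get("/")
def greet(request: Request):
    oauth_info = parse_huggingface_oauth(request)  # None if the user is not logged in
    if oauth_info is None:
        return "Not logged in!"
    return f"Hello, {oauth_info.user_info.preferred_username}!"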

📚 Documentation

We now have much more detailed documentation for Inference! This includes clearer explanations and examples showing that the InferenceClient can also be used effectively with local endpoints (llama.cpp, vLLM, MLX, etc.).

  • [Inference] Mention local endpoints inference + remove separate HF Inference API mentions by @hanouticelina in #3085
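
For example, the client can target any local OpenAI-compatible server; the URL and model name below are illustrative:

from huggingface_hub import InferenceClient

# Assumes a local OpenAI-compatible server is running, e.g. llama.cpp's
# `llama-server --port 8080`.
client = InferenceClient(base_url="http://localhost:8080/v1")

response = client.chat.completions.create(
    model="local-model",  # many local servers ignore or loosely match this field
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)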

🛠️ Small fixes and maintenance

😌 QoL improvements

🐛 Bug and typo fixes

🏗️ internal

  • [Internal] make hf-xet (again) a required dependency #3103
  • fix conda by @han...

[v0.31.4]: strict dataclasses, support `DTensor` saving & some bug fixes

19 May 09:48

This release includes some new features and bug fixes:

  • New strict decorators for runtime dataclass validation with custom and type-based checks. by @Wauplin in #2895.
  • Added DTensor support to _get_unique_id / get_torch_storage_size helpers, enabling transformers to use save_pretrained with DTensor. by @S1ro1 in #3042.
  • Some bug fixes: #3080 & #3076.

Full Changelog: v0.31.2...v0.31.4