Bump huggingface-hub from 0.17.3 to 0.31.1 #1230
Open
dependabot[bot] wants to merge 1 commit into main from
Conversation
Bumps [huggingface-hub](https://github.com/huggingface/huggingface_hub) from 0.17.3 to 0.31.1.
- [Release notes](https://github.com/huggingface/huggingface_hub/releases)
- [Commits](huggingface/huggingface_hub@v0.17.3...v0.31.1)

---
updated-dependencies:
- dependency-name: huggingface-hub
  dependency-version: 0.31.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Contributor
Reviewer's Guide

This pull request upgrades the huggingface-hub dependency from 0.17.3 to 0.31.1.

Sequence Diagram: InferenceClient with 'auto' Provider Selection

```mermaid
sequenceDiagram
    actor Developer
    participant IC as InferenceClient
    participant HFSettings as Hugging Face Settings
    participant SelectedProvider as Auto-Selected Provider
    Developer->>IC: Initialize InferenceClient(provider="auto" or default)
    Developer->>IC: Make inference request (e.g., chat_completions.create(...))
    IC->>HFSettings: Fetch user's preferred provider order (for the model)
    HFSettings-->>IC: Provider order list
    IC->>IC: Select first available/compatible provider from list
    IC->>SelectedProvider: Forward inference request
    SelectedProvider-->>IC: Inference result
    IC-->>Developer: Return inference result
```
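The "select first available/compatible provider" step above can be sketched in plain Python. The `select_provider` helper and the example preference list are illustrative only, not huggingface_hub internals:

```python
# Illustrative sketch of 'auto' provider selection: pick the first
# provider from the user's preference order that also serves the model.
# select_provider and the example data below are hypothetical.

def select_provider(preference_order, model_providers):
    """Return the first preferred provider that can serve the model."""
    for provider in preference_order:
        if provider in model_providers:
            return provider
    raise ValueError("No compatible provider for this model")

# User's preferred order (as configured in Hugging Face settings) and
# the set of providers that actually host the model in question.
preferred = ["fal-ai", "replicate", "hf-inference"]
available = {"replicate", "hf-inference"}

print(select_provider(preferred, available))  # -> replicate
```

With `provider="auto"`, this resolution happens per request, so the same client can route different models to different providers.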
Sequence Diagram: LoRA Inference via InferenceClient with fal.ai/Replicate

```mermaid
sequenceDiagram
    actor Developer
    participant IC as InferenceClient
    participant LoRAProvider as "fal.ai / Replicate"
    Developer->>IC: Initialize InferenceClient(provider="fal-ai" or "replicate")
    Developer->>IC: Call client.text_to_image("A cute cat", model="lora_model_id")
    IC->>LoRAProvider: Request text-to-image with LoRA model
    LoRAProvider-->>IC: Image data
    IC-->>Developer: Return PIL.Image object
```
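As a usage sketch of the flow above (the model id is a placeholder, and a Hugging Face token with access to the fal.ai provider is assumed to be configured in the environment; `provider="replicate"` works the same way):

```python
from huggingface_hub import InferenceClient

def generate(prompt: str, model_id: str):
    # Route the text-to-image call through fal.ai; LoRA models are
    # supported through the same text_to_image interface.
    client = InferenceClient(provider="fal-ai")
    return client.text_to_image(prompt, model=model_id)

if __name__ == "__main__":
    # Placeholder LoRA model id; requires network access and a token.
    image = generate("A cute cat", "some-user/some-lora-model")
    image.save("cat.png")  # text_to_image returns a PIL.Image
```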
Class Diagram: Updates to InferenceClient and AsyncInferenceClient in huggingface-hub

```mermaid
classDiagram
    class InferenceClient {
        +provider: string
        +__init__(self, provider: string = "auto", ...)
        +text_to_image(self, prompt: string, model: string, ...) : PIL.Image
        +feature_extraction(self, ...) : Embeddings
    }
    note for InferenceClient "Default for 'provider' in __init__ changed to 'auto' (was 'hf-inference').\nNew capabilities: LoRA inference (via text_to_image with fal.ai/Replicate) and embeddings (via feature_extraction with Sambanova)."
    class AsyncInferenceClient {
        +provider: string
        +__init__(self, provider: string = "auto", ...)
    }
    note for AsyncInferenceClient "Methods analogous to InferenceClient, supporting new provider features.\nDefault for 'provider' in __init__ changed to 'auto' (was 'hf-inference')."
```
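The embeddings capability noted in the diagram can be sketched as follows (the model id is a placeholder, and a valid Hugging Face token with Sambanova provider access is assumed):

```python
from huggingface_hub import InferenceClient

def embed(text: str, model_id: str):
    # Route feature extraction (embeddings) through the Sambanova provider.
    client = InferenceClient(provider="sambanova")
    return client.feature_extraction(text, model=model_id)

if __name__ == "__main__":
    # Placeholder embedding model id; requires network access and a token.
    embeddings = embed("Hello world", "some-org/some-embedding-model")
    print(embeddings.shape)  # feature_extraction returns a numpy array
```

AsyncInferenceClient mirrors this interface with awaitable methods, so the same call becomes `await client.feature_extraction(...)` in async code.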
File-Level Changes
Bumps huggingface-hub from 0.17.3 to 0.31.1.
Release notes
Sourced from huggingface-hub's releases.
... (truncated)
Commits
- 13a3991 Release: v0.31.1
- e2277d1 fix conda (#3058)
- 153f159 Release: v0.31.0
- 8348905 Release: v0.31.0.rc0
- ca7342d Migrate to logger.warning usage (#3056)
- df9f47b support loras with replicate (#3054)
- 1e40c62 [Inference Providers] fix inference with URL endpoints (#3041)
- caeaeeb Xet Upload with byte array (#3035)
- 2bafd2a Add the 'env' parameter to creating/updating Inference Endpoints (#3045)
- fff83af Update inference types (automated commit) (#3051)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Summary by Sourcery
Chores: