The Hugging Face Hub is the go-to platform for sharing machine learning models. A well-executed release can boost your model's visibility and impact. This section covers essential steps for a concise, informative, and user-friendly model release.
When uploading models to the Hub, follow these best practices:
- **Use separate repositories for different model weights:** Create individual repositories for each variant of the same architecture. This lets you group them into a collection, which users prefer when browsing many options. It also improves visibility, because each model has its own URL (`hf.co/org/model-name`) and is easier to search.
- **Prefer `safetensors` over `pickle` for weight serialization:** `safetensors` is safer and often faster than Python's `pickle`. If you have a `.bin` pickle file, use the weight conversion tool to convert it quickly.
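To see why the format matters, here is a minimal stdlib sketch of the core problem with `pickle`: deserializing a pickle file can execute arbitrary code, which `safetensors` rules out by design. The `Payload` class and the harmless `str.upper` call are illustrative stand-ins for a malicious payload.

```python
import pickle

# Unpickling can run arbitrary callables. str.upper is harmless here,
# but an attacker could substitute os.system or similar.
class Payload:
    def __reduce__(self):
        return (str.upper, ("this ran during unpickling",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)
print(result)  # no Payload instance comes back; the callable ran instead
```

This is why loading a stranger's `.bin` checkpoint is a trust decision, while loading a `.safetensors` file is not.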
A well-crafted model card (the README.md in your repository) is essential for discoverability, reproducibility, and effective sharing. Include:
- **Metadata Configuration:** The YAML metadata section at the top of your model card is key for search and categorization. Include:

  ```yaml
  ---
  pipeline_tag: text-generation    # Specify the task
  library_name: transformers       # Specify the library
  language:
    - en                           # List languages for your model
  license: apache-2.0              # Specify a license
  datasets:
    - username/dataset             # List datasets used for training
  base_model: username/base-model  # If applicable
  ---
  ```
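If you prefer to build this metadata block in code, `huggingface_hub` can generate it for you. A sketch, assuming `huggingface_hub` is installed; the repo names below are placeholders:

```python
from huggingface_hub import ModelCard, ModelCardData

# Build the card metadata programmatically instead of hand-writing YAML.
card_data = ModelCardData(
    pipeline_tag="text-generation",
    library_name="transformers",
    language="en",
    license="apache-2.0",
    datasets=["username/dataset"],      # placeholder repo id
    base_model="username/base-model",   # placeholder repo id
)

# Assemble a card with the metadata as front matter plus a body.
card = ModelCard(f"---\n{card_data.to_yaml()}\n---\n\n# My Model\n")
print(card.data.pipeline_tag)
```

This keeps the metadata valid by construction and makes it easy to templatize cards across many model variants.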
Create the `README.md` in the Web UI, and you'll see a form with the most important metadata fields we recommend 🤗.
*Metadata Form on the Hub UI*

- **Detailed Model Description:** Provide a clear explanation of what your model does, its architecture, and its intended use cases. Help users quickly decide if it fits their needs.
- **Usage Examples:** Provide clear, copy-and-run code snippets for inference, fine-tuning, or other common tasks. Keep the edits users need to make to a minimum.
  Bonus: Add a well-structured `notebook.ipynb` in the repo so users can open it directly in Google Colab and Kaggle Notebooks.
*Google and Kaggle usage buttons*

- **Technical Specifications:** Include training parameters, hardware needs, and other details that help users run the model effectively.
- **Performance Metrics:** Share benchmarks and evaluation results. Include quantitative metrics and qualitative examples to show strengths and limitations.
- **Limitations and Biases:** Document known limitations, biases, and ethical considerations so users can make informed choices.
To make the process more seamless, click Import model card template to pre-fill the `README.md` with placeholders.
| ![]() | ![]() |
|---|---|
| The button to import the model card template | A section of the imported template |
To maximize reach and usability:
- **Library Integration:** Add support for one of the many libraries integrated with the Hugging Face Hub (such as `transformers`, `diffusers`, `sentence-transformers`, or `timm`). This integration significantly increases your model's accessibility and provides users with code snippets for working with your model.
  To specify that your model works with the `transformers` library:

  ```yaml
  ---
  library_name: transformers
  ---
  ```

| ![]() |
|---|
| Code snippet tab |
You can also create your own model library or add Hub support to another existing library or codebase.
We wrote an extensive guide on uploading best practices here.
Note: A recognized library also allows you to track downloads of your model over time.
- **Correct Metadata:**
  - **Pipeline Tag:** Choose the correct pipeline tag so your model shows up in the right searches and widgets.
    Examples of common pipeline tags:

    - `text-generation` - For language models that generate text
    - `text-to-image` - For text-to-image generation models
    - `image-text-to-text` - For vision-language models (VLMs) that generate text
    - `text-to-speech` - For models that generate audio from text

  - **License:** Add a license so users know how they can use the model.
- **Research Papers:** If your model has associated papers, cite them in the model card. They will be linked automatically.

  ```markdown
  ## References

  * [Model Paper](https://arxiv.org/abs/xxxx.xxxxx)
  ```
- **Collections:** If you're releasing multiple related models or variants, organize them into a collection. Collections help users discover related models and understand relationships across versions.
- **Demos:** Create a Hugging Face Space with an interactive demo. This lets users try your model without writing code. You can also link the model from the Space to make it appear on the model page UI.

  ```markdown
  ## Demo

  Try this model directly in your browser: [Space Demo](https://huggingface.co/spaces/username/model-demo)
  ```

  When you create a demo, download the model from its Hub repository (not from external sources like Google Drive). This cross-links the artifacts and improves visibility.
- **Quantized Versions:** Consider uploading quantized versions (for example, GGUF) to improve accessibility for users with limited compute. Link these versions using the `base_model` metadata field on the quantized model cards, and document performance differences.

  ```yaml
  ---
  base_model: username/original-model
  base_model_relation: quantized
  ---
  ```
*Model tree showing quantized versions*

- **Linking Datasets on the Model Page:** Link datasets in your metadata so they appear directly on your model page.
  ```yaml
  ---
  datasets:
    - username/dataset
    - username/dataset-2
  ---
  ```
- **New Model Version:** If your model is an update of an existing one, specify it on the older model's card. This will display a banner on the older page linking to the update.

  ```yaml
  ---
  new_version: username/updated-model
  ---
  ```
- **Visual Examples:** For image or video generation models, include examples directly on your model page using the `<Gallery>` card component.

  ```html
  <Gallery>
  </Gallery>
  ```
- **Carbon Emissions:** If possible, specify the carbon emissions from training.

  ```yaml
  ---
  co2_eq_emissions:
    emissions: 123.45
    source: "CodeCarbon"
    training_type: "pre-training"
    geographical_location: "US-East"
    hardware_used: "8xA100 GPUs"
  ---
  ```
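  If no emissions tracker such as CodeCarbon ran during training, you can still report a rough back-of-the-envelope estimate. The sketch below multiplies GPU power draw, wall-clock training time, and grid carbon intensity; every number is an illustrative assumption you should replace with your own.

  ```python
  # Rough emissions estimate (all inputs are assumed placeholder values).
  gpu_count = 8                # number of GPUs used
  avg_power_kw_per_gpu = 0.4   # assumed average draw per GPU, in kW
  training_hours = 100         # assumed wall-clock training time
  grid_kgco2_per_kwh = 0.38    # assumed grid carbon intensity

  energy_kwh = gpu_count * avg_power_kw_per_gpu * training_hours
  emissions_kg = energy_kwh * grid_kgco2_per_kwh
  print(round(emissions_kg, 1))  # kg of CO2-equivalent
  ```

  An estimate like this is better than omitting the field entirely, but label it as an estimate and state its source.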
- **Visibility Settings:** When you're ready to share your model, switch it to public in your model settings. Before doing so, double-check all documentation and code examples to ensure they're accurate and complete.
- **Gated Access:** If your model needs controlled access, use the gated access feature and clearly state the conditions users must meet. This is important for models with dual-use concerns or commercial restrictions.
A successful model release extends beyond the initial publication. To maintain quality and maximize impact:
- **Verify Functionality:** After release, test all code snippets in a clean environment to confirm they work as expected. This ensures users can run your model without errors or confusion.

  For example, if your model is a `transformers`-compatible LLM:

  ```python
  from transformers import pipeline

  # This should run without errors
  pipe = pipeline("text-generation", model="your-username/your-model")
  result = pipe("Your test prompt")
  print(result)
  ```
- **Share, Share, Share:** Most users discover models through social media, chat channels (like Slack or Discord), or newsletters. Share your model links in these spaces, and also add them to your website or GitHub repositories.

  The more visits and likes your model receives, the higher it appears in the Hugging Face trending section, bringing even more visibility.
- **Community Interaction:** Use the Community tab to answer questions, address feedback, and resolve issues promptly. Clarify confusion, accept helpful suggestions, and close off-topic threads to keep discussions focused.
- **Usage Metrics:** Track downloads and likes to understand your model's reach and adoption. You can view total download metrics in your model's settings.
- **Review Community Contributions:** Regularly check your model's repository for contributions from other users. Community pull requests and discussions can provide useful feedback, ideas, and opportunities for collaboration.
The Hugging Face Enterprise subscription offers additional capabilities for teams and organizations:
- **Access Control:** Set resource groups to manage access for specific teams or users. This ensures the right permissions and secure collaboration across your organization.
- **Storage Region:** Choose the data storage region (US or EU) for your model files to meet regional data regulations and compliance requirements.
- **Advanced Analytics:** Use Enterprise Analytics features to gain deeper insights into model usage patterns, downloads, and adoption trends across your organization.
- **Extended Storage:** Access additional private storage capacity to host more models and larger artifacts as your model portfolio expands.
- **Organization Blog Posts:** Enterprise organizations can publish blog articles directly on Hugging Face. This lets you share model releases, research updates, and announcements with the broader community, all from your organization's profile.
By following these guidelines and examples, you’ll make your model release on Hugging Face clear, useful, and impactful. This helps your work reach more people, strengthens the AI community, and increases your model’s visibility.
We can’t wait to see what you share next! 🤗