microsoft-foundry-local

Minimal example project showing how to use Foundry Local from Python via the foundry-local-sdk, and how to send requests to the locally hosted model using the OpenAI Python client.

Install Microsoft Foundry Local

Official docs: https://learn.microsoft.com/en-us/azure/foundry-local/get-started

Windows (WinGet)

winget install Microsoft.FoundryLocal

macOS (Homebrew, Apple Silicon)

brew tap microsoft/foundrylocal
brew install foundrylocal

Verify installation

foundry --version

Repository contents

  • main.py — Downloads and loads a model with FoundryLocalManager, then sends a sample chat completion request to the local Foundry endpoint using openai.OpenAI.
  • requirements.txt — Python dependencies for running the sample (foundry-local-sdk, openai).
  • LICENSE — License information.
  • .gitignore — Git ignore rules for common Python artifacts.

Prerequisites

  • Python 3.9+

Setup

python -m venv .venv
# Windows
.\.venv\Scripts\activate
# macOS/Linux
# source .venv/bin/activate

pip install -r requirements.txt

Run

python main.py

The script will:

  1. Download the model alias qwen2.5-0.5b (if needed)
  2. Load the model
  3. Create an OpenAI client pointing at the local Foundry endpoint
  4. Send a sample chat completion request
  5. Unload the model
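The steps above can be sketched roughly as follows. This is a minimal sketch, not a copy of main.py: it assumes the foundry-local-sdk surface described in the Foundry Local docs (`FoundryLocalManager`, its `endpoint` and `api_key` attributes, `get_model_info`, and `unload_model`), so check the names against main.py and the SDK before relying on them.

```python
def run_chat(alias: str = "qwen2.5-0.5b") -> str:
    # Imports are deferred so the module can be inspected without the
    # SDK or a running Foundry Local service.
    from foundry_local import FoundryLocalManager
    from openai import OpenAI

    # Constructing the manager with an alias starts the local service,
    # downloads the model if needed, and loads it.
    manager = FoundryLocalManager(alias)
    try:
        # Point the standard OpenAI client at the local Foundry endpoint.
        client = OpenAI(base_url=manager.endpoint, api_key=manager.api_key)
        model_id = manager.get_model_info(alias).id

        # Send a sample chat completion request.
        response = client.chat.completions.create(
            model=model_id,
            messages=[{"role": "user", "content": "Say hello in one sentence."}],
        )
        return response.choices[0].message.content
    finally:
        # Free the model when done.
        manager.unload_model(alias)


if __name__ == "__main__":
    print(run_chat())
```

The `try`/`finally` ensures the model is unloaded even if the request fails, which keeps repeated runs from leaving models resident.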

Notes

  • To use a different model, change model_alias in main.py.
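To see which aliases are available before editing main.py, the SDK can list the model catalog. This is a hedged sketch: it assumes a `list_catalog_models` method on `FoundryLocalManager` exposing objects with `alias` and `id` attributes, so verify the method name against the installed SDK version (the `foundry model list` CLI command is an alternative).

```python
def list_aliases() -> list[str]:
    # Deferred import so this helper can be defined without the SDK installed.
    from foundry_local import FoundryLocalManager

    # Assumption: a bare manager connects to the local service without
    # loading a model, and the catalog exposes alias/id pairs.
    manager = FoundryLocalManager()
    return [model.alias for model in manager.list_catalog_models()]
```

Any alias returned here should be usable as the model_alias value in main.py.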
