Minimal example project showing how to use Foundry Local from Python via `foundry-local-sdk`, then send requests to the locally hosted model using the OpenAI Python client.
Official docs: https://learn.microsoft.com/en-us/azure/foundry-local/get-started
Install Foundry Local:

Windows:

```bash
winget install Microsoft.FoundryLocal
```

macOS:

```bash
brew tap microsoft/foundrylocal
brew install foundrylocal
```

Verify the installation:

```bash
foundry --version
```

Files in this repo:

- `main.py` — Downloads and loads a model with `FoundryLocalManager`, then sends a sample chat completion request to the local Foundry endpoint using `openai.OpenAI`.
- `requirements.txt` — Python dependencies for running the sample (`foundry-local-sdk`, `openai`).
- `LICENSE` — License information.
- `.gitignore` — Git ignore rules for common Python artifacts.
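For reference, the contents of `requirements.txt` presumably boil down to the two packages named above (any version pins in the actual file may differ):

```text
foundry-local-sdk
openai
```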
Prerequisites:

- Python 3.9+

Create and activate a virtual environment:

```bash
python -m venv .venv

# Windows
.\.venv\Scripts\activate

# macOS/Linux
source .venv/bin/activate
```

Install the dependencies and run the sample:

```bash
pip install -r requirements.txt
python main.py
```

The script will (sketched in code after this list):
- Download the model alias `qwen2.5-0.5b` (if needed)
- Load the model
- Create an OpenAI client pointing at the local Foundry endpoint
- Send a sample chat completion request
- Unload the model
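The repo's `main.py` is the source of truth; as a rough sketch of the flow above, using the `foundry-local-sdk` surface documented in the get-started guide (`FoundryLocalManager`, `endpoint`, `api_key`, `get_model_info`; the `unload_model` call is our assumption about how the cleanup step is done), it might look like:

```python
import openai
from foundry_local import FoundryLocalManager

alias = "qwen2.5-0.5b"

# Starts the Foundry Local service if it isn't already running,
# downloads the model for this alias on first use, and loads it.
manager = FoundryLocalManager(alias)

# Point the standard OpenAI client at the local endpoint; no real
# API key is needed for local inference.
client = openai.OpenAI(base_url=manager.endpoint, api_key=manager.api_key)

# Resolve the alias to the concrete model ID and send a sample request.
response = client.chat.completions.create(
    model=manager.get_model_info(alias).id,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)

# Unload the model to free memory (assumed SDK method name).
manager.unload_model(alias)
```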
To use a different model, change `model_alias` in `main.py`.
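Only aliases that Foundry Local recognizes will work; the CLI installed earlier should be able to list the available models and their aliases (command per the official docs):

```bash
foundry model list
```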