
Commit 4fe51cc

ci: skip provider tests without secrets (#1990)
1 parent 2900c25 commit 4fe51cc


45 files changed: +344, -144 lines changed

.github/workflows/test.yml

Lines changed: 130 additions & 6 deletions
@@ -40,6 +40,8 @@ jobs:
     name: Core Provider Tests (OpenAI)
     runs-on: ubuntu-latest
     needs: core-tests
+    env:
+      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

     steps:
       - uses: actions/checkout@v2
@@ -51,8 +53,21 @@ jobs:
         run: uv python install 3.11
       - name: Install the project
         run: uv sync --all-extras
+      - name: Skip core provider tests (OpenAI)
+        if: ${{ env.OPENAI_API_KEY == '' }}
+        run: echo "Skipping OpenAI core provider tests (missing OPENAI_API_KEY)."
       - name: Run core provider tests (OpenAI)
-        run: uv run pytest tests/llm/test_core_providers -v --asyncio-mode=auto -n auto -k "openai"
+        if: ${{ env.OPENAI_API_KEY != '' }}
+        run: |
+          set +e
+          uv run pytest tests/llm/test_core_providers -v --asyncio-mode=auto -n auto -k "openai"
+          status=$?
+          set -e
+          if [ $status -eq 5 ]; then
+            echo "No tests collected; treating as success."
+            exit 0
+          fi
+          exit $status
         env:
           INSTRUCTOR_ENV: CI
           OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
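
Every provider job touched here follows the same gating pattern: the provider secret is mapped into job-level `env`, a "Skip" step logs when it is empty (repository secrets are not passed to workflow runs triggered from forked pull requests, so the variable can legitimately be empty), and the test step runs only when the secret is present. The shell wrapper around pytest also treats exit code 5 as success, because pytest exits with 5 when no tests are collected, e.g. when the `-k` filter matches nothing. A minimal sketch of the pattern, using a hypothetical EXAMPLE_API_KEY secret and test path rather than the repository's own:

jobs:
  example-provider:
    runs-on: ubuntu-latest
    env:
      # Empty string when the secret is not configured (e.g. on forked PRs).
      EXAMPLE_API_KEY: ${{ secrets.EXAMPLE_API_KEY }}
    steps:
      - uses: actions/checkout@v2
      - name: Skip example provider tests
        if: ${{ env.EXAMPLE_API_KEY == '' }}
        run: echo "Skipping example provider tests (missing EXAMPLE_API_KEY)."
      - name: Run example provider tests
        if: ${{ env.EXAMPLE_API_KEY != '' }}
        run: |
          set +e
          uv run pytest tests/llm/test_example_provider -v --asyncio-mode=auto
          status=$?
          set -e
          # pytest exit code 5 means "no tests were collected"; treat it as a pass.
          if [ $status -eq 5 ]; then
            echo "No tests collected; treating as success."
            exit 0
          fi
          exit $status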
@@ -62,6 +77,8 @@ jobs:
     name: Core Provider Tests (Anthropic)
     runs-on: ubuntu-latest
     needs: core-tests
+    env:
+      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}

     steps:
       - uses: actions/checkout@v2
@@ -73,8 +90,21 @@ jobs:
         run: uv python install 3.11
       - name: Install the project
         run: uv sync --all-extras
+      - name: Skip core provider tests (Anthropic)
+        if: ${{ env.ANTHROPIC_API_KEY == '' }}
+        run: echo "Skipping Anthropic core provider tests (missing ANTHROPIC_API_KEY)."
       - name: Run core provider tests (Anthropic)
-        run: uv run pytest tests/llm/test_core_providers -v --asyncio-mode=auto -n auto -k "anthropic"
+        if: ${{ env.ANTHROPIC_API_KEY != '' }}
+        run: |
+          set +e
+          uv run pytest tests/llm/test_core_providers -v --asyncio-mode=auto -n auto -k "anthropic"
+          status=$?
+          set -e
+          if [ $status -eq 5 ]; then
+            echo "No tests collected; treating as success."
+            exit 0
+          fi
+          exit $status
         env:
           INSTRUCTOR_ENV: CI
           ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
@@ -84,6 +114,9 @@ jobs:
     name: Core Provider Tests (Google)
     runs-on: ubuntu-latest
    needs: core-tests
+    env:
+      GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
+      GOOGLE_GENAI_MODEL: ${{ secrets.GOOGLE_GENAI_MODEL }}

     steps:
       - uses: actions/checkout@v2
@@ -95,8 +128,21 @@ jobs:
         run: uv python install 3.11
       - name: Install the project
         run: uv sync --all-extras
+      - name: Skip core provider tests (Google)
+        if: ${{ env.GOOGLE_API_KEY == '' || env.GOOGLE_GENAI_MODEL == '' }}
+        run: echo "Skipping Google core provider tests (missing GOOGLE_API_KEY or GOOGLE_GENAI_MODEL)."
       - name: Run core provider tests (Google)
-        run: uv run pytest tests/llm/test_core_providers -v --asyncio-mode=auto -n auto -k "google"
+        if: ${{ env.GOOGLE_API_KEY != '' && env.GOOGLE_GENAI_MODEL != '' }}
+        run: |
+          set +e
+          uv run pytest tests/llm/test_core_providers -v --asyncio-mode=auto -n auto -k "google"
+          status=$?
+          set -e
+          if [ $status -eq 5 ]; then
+            echo "No tests collected; treating as success."
+            exit 0
+          fi
+          exit $status
         env:
           INSTRUCTOR_ENV: CI
           GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
@@ -106,6 +152,14 @@ jobs:
     name: Core Provider Tests (Other)
     runs-on: ubuntu-latest
     needs: core-tests
+    env:
+      COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
+      XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
+      MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
+      CEREBRAS_API_KEY: ${{ secrets.CEREBRAS_API_KEY }}
+      FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
+      WRITER_API_KEY: ${{ secrets.WRITER_API_KEY }}
+      PERPLEXITY_API_KEY: ${{ secrets.PERPLEXITY_API_KEY }}

     steps:
       - uses: actions/checkout@v2
@@ -117,8 +171,29 @@ jobs:
         run: uv python install 3.11
       - name: Install the project
         run: uv sync --all-extras
+      - name: Skip core provider tests (Other)
+        if: >-
+          ${{ env.COHERE_API_KEY == '' && env.XAI_API_KEY == ''
+          && env.MISTRAL_API_KEY == '' && env.CEREBRAS_API_KEY == ''
+          && env.FIREWORKS_API_KEY == '' && env.WRITER_API_KEY == ''
+          && env.PERPLEXITY_API_KEY == '' }}
+        run: echo "Skipping core provider tests (Other) (missing provider secrets)."
       - name: Run core provider tests (Cohere, xAI, Mistral, etc)
-        run: uv run pytest tests/llm/test_core_providers -v --asyncio-mode=auto -n auto -k "cohere or xai or mistral or cerebras or fireworks or writer or perplexity"
+        if: >-
+          ${{ env.COHERE_API_KEY != '' || env.XAI_API_KEY != ''
+          || env.MISTRAL_API_KEY != '' || env.CEREBRAS_API_KEY != ''
+          || env.FIREWORKS_API_KEY != '' || env.WRITER_API_KEY != ''
+          || env.PERPLEXITY_API_KEY != '' }}
+        run: |
+          set +e
+          uv run pytest tests/llm/test_core_providers -v --asyncio-mode=auto -n auto -k "cohere or xai or mistral or cerebras or fireworks or writer or perplexity"
+          status=$?
+          set -e
+          if [ $status -eq 5 ]; then
+            echo "No tests collected; treating as success."
+            exit 0
+          fi
+          exit $status
         env:
           INSTRUCTOR_ENV: CI
           COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
@@ -134,6 +209,9 @@ jobs:
     name: ${{ matrix.provider.name }} Tests
     runs-on: ubuntu-latest
     needs: [core-openai, core-anthropic, core-google, core-other]
+    env:
+      PROVIDER_API_KEY: ${{ secrets[matrix.provider.env_key] }}
+      GOOGLE_GENAI_MODEL: ${{ secrets.GOOGLE_GENAI_MODEL }}
     strategy:
       fail-fast: false
       matrix:
@@ -167,8 +245,29 @@ jobs:
         run: uv python install 3.11
       - name: Install the project
         run: uv sync --all-extras
+      - name: Skip ${{ matrix.provider.name }} tests
+        if: >-
+          ${{ env.PROVIDER_API_KEY == '' ||
+          ((matrix.provider.name == 'Gemini' || matrix.provider.name == 'Google GenAI'
+          || matrix.provider.name == 'Vertex AI') && env.GOOGLE_GENAI_MODEL == '') }}
+        run: >-
+          echo "Skipping ${{ matrix.provider.name }} tests
+          (missing ${{ matrix.provider.env_key }} or GOOGLE_GENAI_MODEL)."
       - name: Run ${{ matrix.provider.name }} tests
-        run: uv run pytest ${{ matrix.provider.test_path }} --asyncio-mode=auto -n auto
+        if: >-
+          ${{ env.PROVIDER_API_KEY != '' &&
+          ((matrix.provider.name != 'Gemini' && matrix.provider.name != 'Google GenAI'
+          && matrix.provider.name != 'Vertex AI') || env.GOOGLE_GENAI_MODEL != '') }}
+        run: |
+          set +e
+          uv run pytest ${{ matrix.provider.test_path }} --asyncio-mode=auto -n auto
+          status=$?
+          set -e
+          if [ $status -eq 5 ]; then
+            echo "No tests collected; treating as success."
+            exit 0
+          fi
+          exit $status
         env:
           INSTRUCTOR_ENV: CI
           ${{ matrix.provider.env_key }}: ${{ secrets[matrix.provider.env_key] }}
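
The matrix-driven job reuses the same wrapper but resolves its secret dynamically: `secrets[matrix.provider.env_key]` is copied into a generic `PROVIDER_API_KEY` variable so the step-level `if:` checks stay uniform, and the Google-flavored entries (Gemini, Google GenAI, Vertex AI) additionally require `GOOGLE_GENAI_MODEL`. The matrix entries themselves are defined earlier in the workflow and are not part of this hunk; a hypothetical entry might look roughly like the following (field values are illustrative, not taken from the repository):

    strategy:
      fail-fast: false
      matrix:
        provider:
          # Illustrative entry only; the real list lives elsewhere in test.yml.
          - name: Google GenAI
            env_key: GOOGLE_API_KEY
            test_path: tests/llm/test_gemini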
@@ -178,6 +277,12 @@ jobs:
     name: Auto Client Tests
     runs-on: ubuntu-latest
     needs: [core-openai, core-anthropic, core-google, core-other]
+    env:
+      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+      GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
+      COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
+      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+      XAI_API_KEY: ${{ secrets.XAI_API_KEY }}

     steps:
       - uses: actions/checkout@v2
@@ -189,8 +294,27 @@ jobs:
         run: uv python install 3.11
       - name: Install the project
         run: uv sync --all-extras
+      - name: Skip Auto Client tests
+        if: >-
+          ${{ env.OPENAI_API_KEY == '' || env.GOOGLE_API_KEY == ''
+          || env.COHERE_API_KEY == '' || env.ANTHROPIC_API_KEY == ''
+          || env.XAI_API_KEY == '' }}
+        run: echo "Skipping Auto Client tests (missing one or more provider secrets)."
       - name: Run Auto Client tests
-        run: uv run pytest tests/test_auto_client.py --asyncio-mode=auto -n auto
+        if: >-
+          ${{ env.OPENAI_API_KEY != '' && env.GOOGLE_API_KEY != ''
+          && env.COHERE_API_KEY != '' && env.ANTHROPIC_API_KEY != ''
+          && env.XAI_API_KEY != '' }}
+        run: |
+          set +e
+          uv run pytest tests/test_auto_client.py --asyncio-mode=auto -n auto
+          status=$?
+          set -e
+          if [ $status -eq 5 ]; then
+            echo "No tests collected; treating as success."
+            exit 0
+          fi
+          exit $status
         env:
           INSTRUCTOR_ENV: CI
           OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

.github/workflows/ty.yml

Lines changed: 2 additions & 0 deletions
@@ -25,3 +25,5 @@ jobs:
         run: uv sync --all-extras
       - name: Run type check with ty
         run: uv run ty check instructor/
+      - name: Run type check with ty (tests)
+        run: uv run ty check --config-file ty-tests.toml tests

AGENT.md

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@
 - Lint: `uv run ruff check instructor examples tests`
 - Format: `uv run ruff format instructor examples tests`
 - Build docs: `uv run mkdocs serve` (local) or `./build_mkdocs.sh` (production)
+- Waiting: use `sleep <seconds>` for explicit pauses (e.g., CI waits) or to let external processes finish

 ## Architecture
 - **Core**: `instructor/` - Pydantic-based structured outputs for LLMs

CLAUDE.md

Lines changed: 1 addition & 0 deletions
@@ -14,6 +14,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 - Format: `uv run ruff format instructor examples tests`
 - Generate coverage: `uv run coverage run -m pytest tests/ -k "not docs"` then `uv run coverage report`
 - Build documentation: `uv run mkdocs serve` (for local preview) or `./build_mkdocs.sh` (for production)
+- Waiting: use `sleep <seconds>` for explicit pauses (e.g., CI waits) or to let external processes finish

 ## Installation & Setup
 - Fork the repository and clone your fork

docs/blog/posts/announcing-gemini-tool-calling-support.md

Lines changed: 2 additions & 2 deletions
@@ -82,7 +82,7 @@ print(resp)
 #> name='Jason' age=25
 ```

-1. Current Gemini models that support tool calling are `gemini-1.5-flash-latest` and `gemini-1.5-pro-latest`.
+1. Current Gemini models that support tool calling are `gemini-3-flash` and `gemini-1.5-pro-latest`.

 We can achieve a similar thing with the VertexAI SDK. For this to work, you'll need to authenticate to VertexAI.

@@ -120,4 +120,4 @@ print(resp)
 #> name='Jason' age=25
 ```

-1. Current Gemini models that support tool calling are `gemini-1.5-flash-latest` and `gemini-1.5-pro-latest`.
+1. Current Gemini models that support tool calling are `gemini-3-flash` and `gemini-1.5-pro-latest`.

docs/blog/posts/google-openai-client.md

Lines changed: 3 additions & 3 deletions
@@ -38,7 +38,7 @@ client = OpenAI(
 )

 response = client.create(
-    model="gemini-1.5-flash",
+    model="gemini-3-flash",
     messages=[{"role": "user", "content": "Extract name and age from: John is 30"}],
 )
 ```
@@ -118,7 +118,7 @@ class User(BaseModel):


 resp = client.create_iterable(
-    model="gemini-1.5-flash",
+    model="gemini-3-flash",
     messages=[
         {
             "role": "user",
@@ -166,7 +166,7 @@ class Story(BaseModel):


 resp = client.create_partial(
-    model="gemini-1.5-flash",
+    model="gemini-3-flash",
     messages=[
         {
             "role": "user",

docs/integrations/google.md

Lines changed: 7 additions & 7 deletions
@@ -36,7 +36,7 @@ class User(BaseModel):

 # Using from_provider (recommended)
 client = instructor.from_provider(
-    "google/gemini-1.5-flash-latest",
+    "google/gemini-3-flash",
 )

 resp = client.create(
@@ -71,7 +71,7 @@ class User(BaseModel):

 async def extract_user():
     client = instructor.from_provider(
-        "google/gemini-1.5-flash-latest",
+        "google/gemini-3-flash",
         async_client=True,
     )

@@ -116,7 +116,7 @@ class User(BaseModel):


 client = instructor.from_provider(
-    "google/gemini-1.5-flash-latest",
+    "google/gemini-3-flash",
     mode=instructor.Mode.GENAI_STRUCTURED_OUTPUTS,
 )

@@ -159,7 +159,7 @@ class User(BaseModel):


 client = instructor.from_provider(
-    "google/gemini-1.5-flash-latest",
+    "google/gemini-3-flash",
 )

 user = client.create(
@@ -210,7 +210,7 @@ from pydantic import BaseModel


 client = instructor.from_provider(
-    "google/gemini-1.5-flash-latest",
+    "google/gemini-3-flash",
 )


@@ -245,7 +245,7 @@ from pydantic import BaseModel


 client = instructor.from_provider(
-    "google/gemini-1.5-flash-latest",
+    "google/gemini-3-flash",
 )


@@ -373,7 +373,7 @@ import instructor

 # Option 1: Using from_provider
 client = instructor.from_provider(
-    "vertexai/gemini-1.5-flash",
+    "vertexai/gemini-3-flash",
     project="your-project",
     location="us-central1"
 )

docs/integrations/index.md

Lines changed: 2 additions & 2 deletions
@@ -142,8 +142,8 @@ Use these example strings with `from_provider` to quickly get started:
 - [x] `instructor.from_provider("bedrock/anthropic.claude-3-sonnet-20240229-v1:0")`
 - [x] `instructor.from_provider("cerebras/llama3.1-70b")`
 - [x] `instructor.from_provider("fireworks/llama-v3-70b-instruct")`
-- [x] `instructor.from_provider("vertexai/gemini-1.5-flash")`
-- [x] `instructor.from_provider("genai/gemini-1.5-flash")`
+- [x] `instructor.from_provider("vertexai/gemini-3-flash")`
+- [x] `instructor.from_provider("genai/gemini-3-flash")`
 - [x] `instructor.from_provider("ollama/llama3")`

 ### 2. Manual Client Setup

docs/integrations/vertex.md

Lines changed: 3 additions & 3 deletions
@@ -39,7 +39,7 @@ class User(BaseModel):

 # Using from_provider (recommended)
 client = instructor.from_provider(
-    "vertexai/gemini-1.5-flash",
+    "vertexai/gemini-3-flash",
 )

 resp = client.create(
@@ -253,7 +253,7 @@ import instructor

 # Option 1: Using from_provider (simplest)
 client = instructor.from_provider(
-    "vertexai/gemini-1.5-flash",
+    "vertexai/gemini-3-flash",
     project="your-project",  # Optional if set in environment
     location="us-central1"  # Optional, defaults to us-central1
 )
@@ -267,7 +267,7 @@ client = from_genai(
         vertexai=True,
         project="your-project",
         location="us-central1",
-        model="gemini-1.5-flash"
+        model="gemini-3-flash"
     )
 )
 ```
