- Add ModeRegistry for O(1) handler lookups via (Provider, Mode) tuples
- Add ModeHandler base class and protocol interfaces
- Add patch_v2() function for unified provider patching
- Add registry-based retry logic with handler integration
- Add exception hierarchy (RegistryError, ValidationContextError)
- Add mode normalization with deprecation warnings
- Add @register_mode_handler decorator for handler registration
- Add registry unit tests

This PR was written by [Cursor](https://cursor.com)
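The registry described above can be sketched as a plain dict keyed by (provider, mode) tuples, with a decorator for registration. Names like `ModeRegistry` and `register_mode_handler` mirror the commit message, but the bodies below are illustrative assumptions, not the actual instructor implementation.

```python
from enum import Enum
from typing import Callable


class Provider(str, Enum):
    OPENAI = "openai"
    ANTHROPIC = "anthropic"


class Mode(str, Enum):
    TOOLS = "tools"
    JSON_SCHEMA = "json_schema"


class ModeRegistry:
    """Handlers keyed by (Provider, Mode) tuples, giving O(1) lookups."""

    def __init__(self) -> None:
        self._handlers: dict[tuple[Provider, Mode], Callable] = {}

    def register(self, provider: Provider, mode: Mode, handler: Callable) -> None:
        self._handlers[(provider, mode)] = handler

    def get(self, provider: Provider, mode: Mode) -> Callable:
        try:
            return self._handlers[(provider, mode)]
        except KeyError:
            # Surface a readable error instead of a bare tuple KeyError.
            raise KeyError(f"No handler for {provider.value}/{mode.value}")


registry = ModeRegistry()


def register_mode_handler(provider: Provider, mode: Mode):
    """Decorator that registers a handler under (provider, mode)."""
    def decorator(handler: Callable) -> Callable:
        registry.register(provider, mode, handler)
        return handler
    return decorator


@register_mode_handler(Provider.OPENAI, Mode.TOOLS)
def handle_openai_tools(response):
    return response
```

Looking up an unregistered pair raises a `KeyError`, which is where an exception type like `RegistryError` from the hierarchy above would plug in.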
- Remove debug logging blocks in retry.py that wrote to hardcoded local path
- Fix GENAI_STRUCTURED_OUTPUTS enum value to avoid alias collision
- Fix sync retry to extract stream parameter from kwargs like async version
- Add docs/concepts/mode-migration.md explaining legacy mode deprecation
- Add tests/v2/test_mode_normalization.py for mode normalization logic
- Update mkdocs.yml with mode migration guide link
- Tests skip gracefully when handlers not yet registered
- Fix tautological test assertion to verify handler exists
- Use provider-specific deprecated modes in warning test
- Add instructor/v2/providers/anthropic/ with handlers for TOOLS, JSON_SCHEMA, PARALLEL_TOOLS, ANTHROPIC_REASONING_TOOLS modes
- Add instructor/v2/providers/openai/ with handlers for TOOLS, JSON_SCHEMA, MD_JSON, PARALLEL_TOOLS, RESPONSES_TOOLS modes
- Update instructor/v2/__init__.py with from_anthropic and from_openai exports
- Update instructor/auto_client.py with v2 routing integration
- Add tests/v2/test_provider_modes.py for integration tests
- Add tests/v2/test_handlers_parametrized.py for unit tests
- Add tests/v2/test_openai_streaming.py for streaming tests
- Remove debug logging in auto_client.py for Cohere client
- Fix google provider to use v1 from_genai (v2 not available yet)
- Add empty check for text_blocks in Anthropic MD_JSON handler
- Add None check for tool_calls in OpenAI PARALLEL_TOOLS handler
- Add instructor/v2/providers/genai/ with handlers for TOOLS, JSON modes
- Add instructor/v2/providers/cohere/ with handlers for TOOLS, JSON_SCHEMA, MD_JSON modes
- Add instructor/v2/providers/mistral/ with handlers for TOOLS, JSON_SCHEMA, MD_JSON modes
- Update instructor/v2/__init__.py with from_genai, from_cohere, from_mistral exports
- Add tests/v2/test_genai_integration.py
- Add tests/v2/test_cohere_handlers.py
- Add tests/v2/test_mistral_client.py and test_mistral_handlers.py
- Remove debug logging in Cohere client
- Fix shallow copy mutation in Cohere handlers (copy messages list)
- Add empty list check in Mistral MD_JSON handler
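The shallow-copy fix above addresses a classic bug class: a handler that mutates the caller's messages list leaks state across retries. A hypothetical before/after (function names are illustrative, not the Cohere handler's actual API):

```python
def prepare_request_buggy(messages: list, system_prompt: dict) -> list:
    # BUG: `messages` is the caller's own list; insert() mutates it, so a
    # retry that reuses the same list accumulates duplicate system prompts.
    messages.insert(0, system_prompt)
    return messages


def prepare_request_fixed(messages: list, system_prompt: dict) -> list:
    # Copy before mutating so the caller's list is untouched across retries.
    messages = list(messages)
    messages.insert(0, system_prompt)
    return messages
```

A shallow `list(messages)` is enough here because only the list itself is mutated, not the message dicts inside it.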
…iter, Bedrock)

- Add instructor/v2/providers/xai/ with handlers for TOOLS, JSON_SCHEMA, MD_JSON modes
- Add instructor/v2/providers/groq/ with handlers for TOOLS, MD_JSON modes
- Add instructor/v2/providers/fireworks/ with handlers for TOOLS, MD_JSON modes
- Add instructor/v2/providers/cerebras/ with handlers for TOOLS, MD_JSON modes
- Add instructor/v2/providers/writer/ with handlers for TOOLS, MD_JSON modes
- Add instructor/v2/providers/bedrock/ with handlers for TOOLS, MD_JSON modes
- Update instructor/v2/__init__.py with all provider exports
- Add provider-specific test files for all 6 providers

All 11 v2 providers are now implemented.
- Fix Bedrock MD_JSON handler to return early for None response_model
- Fix Fireworks async streaming to await the coroutine
- Fix xAI async streaming filter to only check for tool_calls
- Add fallback error handling for xAI sync streaming
- Add list content case to xAI MD_JSON handler
- Add truncated output detection to Writer handlers
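The Fireworks fix above is an instance of a common async bug: calling an async client returns a coroutine that must be awaited before the resulting stream can be iterated. A minimal reproduction with a stand-in client (the real Fireworks SDK is not used here):

```python
import asyncio
from typing import AsyncIterator


async def fake_create_stream() -> AsyncIterator[str]:
    # Stand-in for an async SDK call: awaiting it yields an async iterator.
    async def gen():
        for chunk in ("a", "b", "c"):
            yield chunk
    return gen()


async def stream_fixed() -> list:
    # FIX: await the coroutine first, then iterate the async generator.
    # Iterating the un-awaited coroutine would raise a TypeError.
    stream = await fake_create_stream()
    return [chunk async for chunk in stream]
```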
Test reorganization:
- Move cache tests to tests/cache/
- Move core tests to tests/core/ (exceptions, patch, retry, schema)
- Move multimodal tests to tests/multimodal/
- Move processing tests to tests/processing/
- Move provider tests to tests/providers/
- Remove obsolete/duplicate test files

Unified test infrastructure:
- Add tests/v2/test_client_unified.py - Parametrized tests for all provider clients
- Add tests/v2/test_handler_registration_unified.py - Handler registration validation
- Add tests/v2/test_routing.py - Provider routing tests
- Add tests/v2/README.md - Test documentation
- Add tests/v2/UNIFICATION_OPPORTUNITIES.md - Future consolidation notes
The test expects a deprecation warning that hasn't been added to v1 from_anthropic yet
Documentation:
- Add instructor/v2/README.md with comprehensive architecture documentation
- Update docs/modes-comparison.md with v2 mode mappings
- Update docs/integrations/anthropic.md, genai.md, google.md, bedrock.md
- Update docs/concepts/from_provider.md with v2 routing
- Update docs/api.md with v2 exports
- Update CLAUDE.md with v2 development notes

Code cleanup:
- Update pyproject.toml version
- Update .github/workflows/test.yml
- Minor fixes in instructor/core/, dsl/, processing/, providers/
- Remove obsolete plan/seo_plan.md
Remove debug logging blocks that write to hardcoded local path
Fix non-deterministic test collection by sorting providers before parameterization
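Sorting matters here because iterating a set is not stable across interpreter runs: string hashing is randomized per process, so parallel test runners (and repeated runs) can collect parameters in different orders. A minimal illustration, with a hypothetical provider set:

```python
# Hypothetical provider set; in the real suite this would come from the
# registered v2 providers.
PROVIDERS = {"openai", "anthropic", "cohere", "mistral"}

# Non-deterministic across interpreter runs (hash randomization):
params_bad = list(PROVIDERS)

# Deterministic: sort before handing the values to pytest.mark.parametrize,
# so every worker collects the same test IDs in the same order.
params_good = sorted(PROVIDERS)
```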
Remove debug logging blocks that write to hardcoded local path in prepare_request and parse_response methods
- Pass stream_extractor to Partial/Iterable streaming helpers (keep legacy fallback)
- Remove stray no-op import in vertexai shim
- Restore useful type info in openai_schema TypeError
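The first bullet describes an injected-extractor refactor: the streaming helper no longer hardcodes provider/mode chunk logic but receives a callable that pulls text out of each raw chunk, with a legacy fallback when none is passed. A sketch under those assumptions (names are illustrative, not instructor's actual signatures):

```python
from typing import Callable, Iterable, Iterator, Optional


def extract_text_stream(
    chunks: Iterable,
    stream_extractor: Optional[Callable] = None,
) -> Iterator[str]:
    """Yield text per chunk via the injected extractor, else a legacy fallback."""
    for chunk in chunks:
        if stream_extractor is not None:
            # Provider-specific logic lives in the injected callable.
            yield stream_extractor(chunk)
        else:
            # Legacy fallback: assume the chunk is already text-like.
            yield str(chunk)


# Example of a provider-specific extractor injected by the caller:
def openai_extractor(chunk) -> str:
    return chunk["choices"][0]["delta"].get("content", "")
```

This keeps Partial/Iterable helpers provider-agnostic: adding a provider means writing an extractor, not editing the streaming loop.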
Remove redundant handler files and register these providers directly via OPENAI_COMPAT_PROVIDERS list. These providers use OpenAI-compatible APIs, so they can share the same handler implementations.

- Add GROQ, FIREWORKS, CEREBRAS to OPENAI_COMPAT_PROVIDERS
- Update client imports to use OpenAI handlers module
- Update _HANDLER_SPECS to point to OpenAI handlers
- Remove redundant handler files (groq, fireworks, cerebras)
- Update registry to remove deleted handler modules
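The deduplication above can be sketched as pointing every OpenAI-compatible provider at the same handler module instead of keeping a copy per provider. `OPENAI_COMPAT_PROVIDERS` matches the name in the commit; the registration mechanics below are assumptions for illustration:

```python
OPENAI_COMPAT_PROVIDERS = ["groq", "fireworks", "cerebras"]

# Hypothetical flat registry: (provider, mode) -> handler module path.
HANDLERS: dict = {}


def register(provider: str, mode: str, handler_module: str) -> None:
    HANDLERS[(provider, mode)] = handler_module


# Register the canonical OpenAI handlers once...
for mode in ("tools", "md_json"):
    register("openai", mode, "instructor.v2.providers.openai.handlers")

# ...then alias every OpenAI-compatible provider to the same module,
# so there is nothing provider-specific to maintain or delete later.
for provider in OPENAI_COMPAT_PROVIDERS:
    for mode in ("tools", "md_json"):
        register(provider, mode, HANDLERS[("openai", mode)])
```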
- Introduced `_parse_with_registry` to centralize parsing logic and handle deprecation warnings.
- Updated `ResponseSchema` methods for parsing Anthropic tools, JSON, OpenAI functions, and tools to use the new method.
- Deprecated `ResponseSchema.parse_*` methods in favor of `process_response` and `ResponseSchema.from_response` with core modes.
- Updated documentation to reflect the deprecation of legacy `ResponseSchema.parse_*` helpers.
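The deprecation pattern above can be sketched as a thin legacy wrapper that warns and delegates to the centralized parser. `_parse_with_registry` here is a stand-in body, not the real implementation:

```python
import warnings


def _parse_with_registry(response, mode: str):
    # Centralized parsing entry point; the real logic would dispatch
    # through the mode registry. Stand-in body for illustration.
    return {"mode": mode, "raw": response}


def parse_anthropic_tools(response):
    """Legacy helper kept for compatibility; warns and delegates."""
    warnings.warn(
        "parse_anthropic_tools is deprecated; use process_response or "
        "ResponseSchema.from_response with core modes instead.",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller, not this shim
    )
    return _parse_with_registry(response, mode="anthropic_tools")
```

`stacklevel=2` makes the warning show the caller's line, which is what makes these deprecations actionable for downstream users.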
…egistry

# Conflicts:
#	pyproject.toml
#	uv.lock
Move v2 provider helper logic into handler modules so request/parse behavior lives next to each mode handler and remove redundant utils modules.
Deploying with

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ❌ Deployment failed | instructor | c8689a0 | Feb 06 2026, 05:21 PM |
# Conflicts:
#	instructor/providers/xai/client.py
#	instructor/providers/xai/utils.py
#	requirements.txt
This PR contains the runtime/provider implementation changes from the mode-registry unification effort, with docs/examples moved to a separate PR for simpler review.
Companion PR
Why
What is included
- `instructor/core/**`, `instructor/v2/**`, and provider clients/handlers
- `from_provider` and provider resolution changes in runtime code
- `tests/`

What is intentionally excluded

- `docs/**`, `examples/**`, and docs tooling/test grouping updates (moved to companion PR)

Validation

main.

Note
High Risk
Touches core request/response patching, retry behavior, and provider client construction across many integrations, so regressions could affect structured output parsing and mode compatibility at runtime.
Overview
Shifts the runtime structured-output pipeline to a registry-driven v2 routing model:
- `core.patch`, `core.retry`, and `core.client.from_openai` now delegate to v2 registry handlers (with provider-aware mode normalization/validation) instead of legacy per-mode branching.
- Updates `from_provider` to preferentially construct v2 provider clients (OpenAI-compatible providers like Databricks/DeepSeek/OpenRouter, plus Anthropic/GenAI/Cohere/Mistral/Bedrock/VertexAI/xAI/etc.), standardizing default modes (often `Mode.TOOLS`/`Mode.MD_JSON`) and adding a new Vertex AI initialization path.
- Public exports are adjusted to introduce `ResponseSchema`/`response_schema` and reduce optional provider re-exports, while streaming DSL helpers are refactored to accept injected stream extractors/parsers rather than embedding provider/mode-specific chunk logic.
- Also includes CI/ops and typing cleanups: GitHub workflows switch type checking from `pyright` to `ty`, add `MISTRAL_API_KEY` to CI env, tighten batch/cache typing, and remove obsolete agent/cursor guidance files.

Written by Cursor Bugbot for commit 0a65bcd.
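The provider-aware mode normalization mentioned in the overview can be sketched as a lookup from legacy provider-specific modes onto core modes, emitting a deprecation warning on the way. The mapping entries below are illustrative, not the library's full table:

```python
import warnings

# Hypothetical legacy-to-core mapping; real entries live in the registry.
LEGACY_MODE_MAP = {
    ("anthropic", "anthropic_tools"): "tools",
    ("openai", "functions"): "tools",
}


def normalize_mode(provider: str, mode: str) -> str:
    """Return the core mode for a (provider, mode) pair, warning on legacy modes."""
    core_mode = LEGACY_MODE_MAP.get((provider, mode))
    if core_mode is not None:
        warnings.warn(
            f"Mode {mode!r} is deprecated for provider {provider!r}; "
            f"use {core_mode!r} instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return core_mode
    # Already a core mode: pass through unchanged.
    return mode
```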