Instructor is a Gleam library for structured prompting with Large Language Models. It converts LLM text outputs into validated data structures, enabling seamless integration between AI and traditional Gleam applications.
- Structured Prompting: Define response schemas and get validated structured data from LLMs
- Multiple LLM Providers: Support for OpenAI (GPT-5.2), Anthropic (Claude 4.5), Google Gemini (Gemini 3), Groq, Ollama, and Codex (ChatGPT subscription)
- Validation & Retry Logic: Automatic retry with error feedback when responses don't match schemas
- Streaming Support: Handle partial and array streaming responses
- Type Safe: Full Gleam type safety for LLM interactions
- gpt-5.2 - Latest GPT-5.2 model (Dec 2025) with Instant, Thinking, and Pro modes
- gpt-5.2-pro - GPT-5.2 Pro variant for advanced enterprise tasks
- gpt-5 - GPT-5 base model with 400K context, dynamic thinking mode
- gpt-4o - GPT-4 Omni model (legacy)
- gpt-4o-mini - Fast, cost-effective GPT-4 variant
- o1 - Advanced reasoning model
- claude-opus-4.5 - Most powerful Claude 4.5 model for complex coding, agentic tasks, and 1M token context
- claude-sonnet-4.5 - Balanced performance with enhanced coding and reasoning (recommended)
- claude-haiku-4.5 - Fast, efficient with near-frontier capabilities
- claude-3-5-sonnet-20241022 - Previous generation Claude 3.5 (legacy)
- claude-3-5-haiku-20241022 - Fast, efficient Claude 3.5 (legacy)
- gemini-3-pro - Most capable Gemini 3 for complex multimodal reasoning
- gemini-3-flash - High performance with cost efficiency (recommended)
- gemini-3-flash-lite - Lightweight, high-throughput variant
- gemini-2.5-pro - Previous generation Gemini 2.5 (legacy)
- gemini-2.5-flash - Previous generation Gemini 2.5 (legacy)
- llama-3.3-70b-versatile - Latest Llama 3.3
- llama-3.1-70b-versatile - Llama 3.1 70B
- mixtral-8x7b-32768 - Mixtral MoE model
- llama3.2 - Latest Llama 3.2
- qwen2.5 - Qwen 2.5 models
- mistral - Mistral models
- codex-mini-latest - Fast, 200K context, optimized for speed ($1.50/$6 per 1M tokens)
- gpt-5.2-codex - Full quality, 400K context, best reasoning
- gpt-5.2 - General purpose, 400K context
Authentication: Requires ~/.codex/auth.json (run: codex login)
Reasoning Effort: minimal, low, medium, high (thinking depth)
No Pay-Per-Token: Subscription-based via ChatGPT Plus/Pro only
import gleam/io
import gleam/option.{None}
import gleam/string
import instructor
import instructor/types
// Create configuration
let config = instructor.default_config()
// Create a simple response model
let response_model = instructor.string_response_model("Extract the sentiment as positive, negative, or neutral")
// Make a chat completion
let messages = [instructor.user_message("I love Gleam programming!")]
case instructor.chat_completion(
config,
response_model,
messages,
None, // model (uses default)
None, // temperature
None, // max_tokens
None, // mode
None, // max_retries
None, // validation_context
) {
types.Success(result) -> io.println("Sentiment: " <> result)
types.ValidationError(errors) -> io.println("Validation failed: " <> string.join(errors, ", "))
types.AdapterError(error) -> io.println("API error: " <> error)
}

Response models define the structure and validation for LLM outputs:
// Simple string response
let string_model = instructor.string_response_model("Description of the field")
// Integer response
let int_model = instructor.int_response_model("A number between 1-10")
// Boolean response
let bool_model = instructor.bool_response_model("True if positive sentiment")

For complex domain models, use custom validators with business logic:
import instructor/validator
pub type Person {
Person(name: String, age: Int, email: String)
}
pub fn person_validator() -> validator.CustomValidator(Person) {
let decoder = person_decoder()
let validation = validator.compose_validators([
validate_name,
validate_age,
validate_email,
])
validator.custom_validator(decoder, validation)
}

See examples/advanced_validators.gleam for complete examples.
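The field validators referenced above (validate_name, validate_age, validate_email) are plain functions. Here is a sketch of one of them, assuming a validator maps the decoded value to a Result carrying an error message; check instructor/validator for the exact signature expected by compose_validators:

```gleam
// Sketch only: the validator type is defined by instructor/validator.
// Here we assume a validator is fn(Person) -> Result(Person, String).
fn validate_age(person: Person) -> Result(Person, String) {
  case person.age > 0 && person.age < 150 {
    True -> Ok(person)
    False -> Error("age must be a realistic positive number")
  }
}
```

When any validator in the composed list returns an Error, the library can feed that message back to the LLM on retry, which is what makes the retry loop converge.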
Build sophisticated schemas with the schema builder:
import instructor/json_schema
let person_schema =
json_schema.object_builder()
|> json_schema.add_string_field("name", "Person's name", True)
|> json_schema.add_int_field("age", "Person's age", True)
|> json_schema.add_enum_field("status", "Status", ["active", "inactive"], True)
|> json_schema.build_object(Some("Person information"))
// With constraints
let score_schema = json_schema.float_with_range(
Some("Confidence score"),
Some(0.0),
Some(1.0)
)
// With pattern validation
let email_schema = json_schema.string_with_pattern(
Some("Email address"),
"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
)

Create messages for conversation:
let messages = [
instructor.system_message("You are a helpful assistant."),
instructor.user_message("What is the capital of France?"),
]

Different modes for LLM interaction:
- Tools - OpenAI function calling (most reliable)
- Json - JSON mode
- JsonSchema - Structured outputs with schema
- MdJson - JSON in markdown code blocks
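A mode is passed as the optional mode argument to chat_completion. For example, to request Tools mode explicitly in the quick-start call (a sketch; it assumes config, response_model, and messages are defined as in the earlier example, and that the mode constructors live in instructor/types):

```gleam
import gleam/option.{None, Some}
import instructor
import instructor/types

// Same call as the quick start, but with a mode chosen explicitly
// instead of None. Tools is typically the most reliable with
// OpenAI-compatible adapters.
let result =
  instructor.chat_completion(
    config,
    response_model,
    messages,
    None,              // model (uses default)
    None,              // temperature
    None,              // max_tokens
    Some(types.Tools), // mode: OpenAI function calling
    None,              // max_retries
    None,              // validation_context
  )
```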
Configure adapters in your application:
import instructor/types
import instructor/adapters/openai
import instructor/adapters/codex
// OpenAI (pay-per-token)
let openai_config = instructor.InstructorConfig(
adapter: openai.openai_adapter(),
default_model: "gpt-4o-mini",
default_max_retries: 3,
)
// Codex (ChatGPT OAuth - subscription only)
case codex.codex_config_from_file(Some("medium"), False) {
Ok(codex_auth) -> {
let codex_config = instructor.InstructorConfig(
adapter: codex.codex_adapter(),
default_model: "codex-mini-latest",
default_max_retries: 2,
)
// Use codex_config for completions
}
Error(msg) -> io.println("Codex auth failed: " <> msg)
}

See examples/codex_usage.gleam for complete Codex examples, including smart model selection and reasoning effort configuration.
This is a port of the Elixir Instructor library to Gleam. The current implementation includes:
- ✅ Core types and data structures
- ✅ JSON schema generation
- ✅ Validation using gleam/dynamic/decode
- ✅ Adapter pattern for multiple LLMs
- ✅ OpenAI, Anthropic, Gemini, Groq, Ollama, and Codex adapters
- ✅ HTTP client implementation
- ✅ Basic test suite
- ✅ Streaming support (partial and array streaming modes)
- ✅ Custom validators for complex domain models
- ✅ Advanced JSON schema generation with builder pattern
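Streaming is listed above but not shown elsewhere in this README. The following is a hypothetical sketch of consuming partial results; the function name chat_completion_stream and the PartialResult/FinalResult variants are illustrative assumptions, not confirmed API, so consult the module docs for the real names:

```gleam
import gleam/io
import instructor
import instructor/types

// HYPOTHETICAL: every name below that is not in the quick start is an
// assumption for illustration. The idea: each streamed chunk is either
// a partial (incomplete) value or the final validated one.
instructor.chat_completion_stream(
  config,
  response_model,
  messages,
  fn(chunk) {
    case chunk {
      types.PartialResult(text) -> io.println("partial: " <> text)
      types.FinalResult(text) -> io.println("final: " <> text)
    }
  },
)
```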
Add to your gleam.toml:
[dependencies]
gleam_stdlib = "~> 0.67"
gleam_http = "~> 4.3"
gleam_httpc = "~> 5.0"
gleam_json = "~> 3.1"

MIT License - see LICENSE for details.
This library is inspired by the Instructor library for Python and its Elixir port. Special thanks to the Gleam community for their excellent language and tooling.