Releases: jamesrochabrun/SwiftOpenAI
SwiftOpenAI v3.8.0
SwiftOpenAI v3.7.0
Fixing version issues.
What's Changed
- Remove 'some' from factory return type by @lzell in #73
- Touch up the AIProxy section of README by @lzell in #72
- Fixup sample project after removing 'some' from factory methods by @lzell in #74
- Fix for chat stream demo with function call. by @jamesrochabrun in #77
- Adding support for JsonSchema as response format. by @jamesrochabrun in #78
- Demo for structured outputs in response format. by @jamesrochabrun in #79
- Support for Structure outputs in tools by @jamesrochabrun in #80
- Updating READ.me by @jamesrochabrun in #81
Full Changelog: v3.6.2...v3.7
SwiftOpenAI v3.4.0

Structured Outputs
Documentation:
Key things to know:
- All fields must be required. To use Structured Outputs, all fields or function parameters must be specified as required.
- Although all fields must be required (and the model will return a value for each parameter), it is possible to emulate an optional parameter by using a union type with null.
- Objects have limitations on nesting depth and size: a schema may have up to 100 object properties total, with up to 5 levels of nesting.
- additionalProperties: false must always be set in objects. additionalProperties controls whether an object may contain additional keys / values that were not defined in the JSON Schema. Structured Outputs only supports generating specified keys / values, so developers must set additionalProperties: false to opt into Structured Outputs.
- Key ordering: when using Structured Outputs, outputs will be produced in the same order as the keys in the schema.
How to use Structured Outputs in SwiftOpenAI
- Function calling: Structured Outputs via tools is available by setting strict: true within your function definition. This feature works with all models that support tools, including gpt-4-0613, gpt-3.5-turbo-0613, and later models. When Structured Outputs is enabled, model outputs will match the supplied tool definition.
Using this schema:
{
  "schema": {
    "type": "object",
    "properties": {
      "steps": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "explanation": { "type": "string" },
            "output": { "type": "string" }
          },
          "required": ["explanation", "output"],
          "additionalProperties": false
        }
      },
      "final_answer": { "type": "string" }
    },
    "required": ["steps", "final_answer"],
    "additionalProperties": false
  }
}
You can use the convenient JSONSchema object like this:
// 1. Define the Step schema object.
let stepSchema = JSONSchema(
  type: .object,
  properties: [
    "explanation": JSONSchema(type: .string),
    "output": JSONSchema(type: .string)
  ],
  required: ["explanation", "output"],
  additionalProperties: false
)

// 2. Define the steps array schema.
let stepsArraySchema = JSONSchema(type: .array, items: stepSchema)

// 3. Define the final answer schema.
let finalAnswerSchema = JSONSchema(type: .string)

// 4. Define the math response JSON schema.
let mathResponseSchema = JSONSchema(
  type: .object,
  properties: [
    "steps": stepsArraySchema,
    "final_answer": finalAnswerSchema
  ],
  required: ["steps", "final_answer"],
  additionalProperties: false
)

let tool = ChatCompletionParameters.Tool(
  function: .init(
    name: "math_response",
    strict: true,
    parameters: mathResponseSchema))

let prompt = "solve 8x + 31 = 2"
let systemMessage = ChatCompletionParameters.Message(role: .system, content: .text("You are a math tutor"))
let userMessage = ChatCompletionParameters.Message(role: .user, content: .text(prompt))
let parameters = ChatCompletionParameters(
  messages: [systemMessage, userMessage],
  model: .gpt4o20240806,
  tools: [tool])
let chat = try await service.startChat(parameters: parameters)
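Because strict: true guarantees the tool call arguments conform to mathResponseSchema, you can decode them straight into a local Codable type. Below is a minimal sketch, assuming the completion object exposes the tool call arguments as a JSON string via choices[].message.toolCalls; the MathResponse type is hypothetical and defined only for this example:
import Foundation

// Hypothetical Codable mirror of the math_response schema (not part of SwiftOpenAI).
struct MathResponse: Decodable {
  struct Step: Decodable {
    let explanation: String
    let output: String
  }
  let steps: [Step]
  let finalAnswer: String

  enum CodingKeys: String, CodingKey {
    case steps
    case finalAnswer = "final_answer"
  }
}

// Assumes the tool call carries its arguments as a JSON string, as in the library's function calling demo.
if let arguments = chat.choices.first?.message.toolCalls?.first?.function.arguments,
   let data = arguments.data(using: .utf8) {
  let mathResponse = try JSONDecoder().decode(MathResponse.self, from: data)
  print("Final answer:", mathResponse.finalAnswer)
}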
- A new option for the response_format parameter: developers can now supply a JSON Schema via json_schema. This is useful when the model is not calling a tool but rather responding to the user in a structured way. This feature works with the newest GPT-4o models: gpt-4o-2024-08-06 and gpt-4o-mini-2024-07-18. When a response_format is supplied with strict: true, model outputs will match the supplied schema.
Using the previous schema, this is how you can implement it as a JSON schema using the convenient JSONSchemaResponseFormat object:
// 1. Define the Step schema object.
let stepSchema = JSONSchema(
  type: .object,
  properties: [
    "explanation": JSONSchema(type: .string),
    "output": JSONSchema(type: .string)
  ],
  required: ["explanation", "output"],
  additionalProperties: false
)

// 2. Define the steps array schema.
let stepsArraySchema = JSONSchema(type: .array, items: stepSchema)

// 3. Define the final answer schema.
let finalAnswerSchema = JSONSchema(type: .string)

// 4. Define the response format JSON schema.
let responseFormatSchema = JSONSchemaResponseFormat(
  name: "math_response",
  strict: true,
  schema: JSONSchema(
    type: .object,
    properties: [
      "steps": stepsArraySchema,
      "final_answer": finalAnswerSchema
    ],
    required: ["steps", "final_answer"],
    additionalProperties: false
  )
)

let prompt = "solve 8x + 31 = 2"
let systemMessage = ChatCompletionParameters.Message(role: .system, content: .text("You are a math tutor"))
let userMessage = ChatCompletionParameters.Message(role: .user, content: .text(prompt))
let parameters = ChatCompletionParameters(
  messages: [systemMessage, userMessage],
  model: .gpt4o20240806,
  responseFormat: .jsonSchema(responseFormatSchema))
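With response_format, the structured JSON arrives as the assistant message's content rather than as a tool call, so it can be decoded the same way. A minimal sketch, assuming the message content comes back as a plain String and reusing the hypothetical MathResponse type sketched above:
let chat = try await service.startChat(parameters: parameters)

// Assumes the structured JSON payload is returned as the assistant message's string content.
if let json = chat.choices.first?.message.content,
   let data = json.data(using: .utf8) {
  let mathResponse = try JSONDecoder().decode(MathResponse.self, from: data)
  print("Final answer:", mathResponse.finalAnswer)
}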
SwiftOpenAI Structured Outputs supports:
- Structured outputs via tools.
- Structured outputs via response format.
- Recursive schemas.
- Optional value schemas.
Pydantic models are not supported; users need to manually create schemas using the JSONSchema or JSONSchemaResponseFormat objects.
Pro tip 🔥 Use the iosAICodeAssistant GPT to construct SwiftOpenAI schemas. Just paste your JSON schema and ask the GPT to create SwiftOpenAI schemas for tools and response format.
For more details, visit the Demo project for tools and response format.
What's Changed Summary:
- Remove 'some' from factory return type by @lzell in #73
- Touch up the AIProxy section of README by @lzell in #72
- Fixup sample project after removing 'some' from factory methods by @lzell in #74
- Fix for chat stream demo with function call. by @jamesrochabrun in #77
- Adding support for JsonSchema as response format. by @jamesrochabrun in #78
- Demo for structured outputs in response format. by @jamesrochabrun in #79
- Support for Structure outputs in tools by @jamesrochabrun in #80
- Updating READ.me by @jamesrochabrun in #81
Full Changelog: v3.6.2...v3.4.0
SwiftOpenAI v3.6.2
What's Changed
- Fix for azure stream options by @jamesrochabrun in #68
- Not working createThreadAndRun / createThreadAndRunStream #66 by @jamesrochabrun in #69
- Adding a small hint in demo by @jamesrochabrun in #70
Full Changelog: v3.6.1...v3.6.2
AIProxy bug fix
If the developer does not provide an AIProxy serviceURL in their integration, we want to fall back to api.aiproxy.pro and not http://Lous-MacBook-Air-3.local:4000 😅
#64
Making debug prints optional.
Now consumers of the library can opt in to print events on DEBUG builds.
e.g.:
let service = OpenAIServiceFactory.service(apiKey: YOUR_API_KEY, debugEnabled: true)
Support for GPT-4o mini: advancing cost-efficient intelligence
AIProxy updates.
- The factory method OpenAIServiceFactory.ollama has been changed to OpenAIServiceFactory.service, where you specify the URL of a service that is OpenAI-API compatible. To specify the URL and API key (for Bearer authentication), use:
OpenAIServiceFactory.service(apiKey: "YOUR_API_KEY", baseURL: "http://<DOMAIN>:<PORT>")
- The AIProxy integration now uses certificate pinning to prevent threat actors from snooping on your traffic. No changes to your client code are necessary to take advantage of this security improvement.
- The AIProxy integration has changed the method for hiding the DeviceCheck bypass token. This token is only intended to be used on iOS simulators, and the previous method of hiding it was too easy to leak into production builds of the app. Please change your integration code from:
#if DEBUG && targetEnvironment(simulator)
OpenAIServiceFactory.service(
  aiproxyPartialKey: "hardcode-partial-key-here",
  aiproxyDeviceCheckBypass: "hardcode-device-check-bypass-here"
)
#else
OpenAIServiceFactory.service(aiproxyPartialKey: "hardcode-partial-key-here")
#endif
To this:
OpenAIServiceFactory.service(
  aiproxyPartialKey: "hardcode-partial-key-here"
)
Then use the method described in the README to add the bypass token as an environment variable to your Xcode project.
Ollama OpenAI compatibility.
Ollama now has built-in compatibility with the OpenAI Chat Completions API, making it possible to use more tooling and applications with Ollama locally.

Remember that these models run locally, so you need to download them. If you want to use llama3, you can open the terminal and run the following command:
ollama pull llama3
You can follow the Ollama documentation for more details.
How to use these models locally with SwiftOpenAI?
To use local models with an OpenAIService in your application, you need to provide a URL.
let service = OpenAIServiceFactory.ollama(baseURL: "http://localhost:11434")
Then you can use the completions API as follows:
let prompt = "Tell me a joke"
let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .custom("llama3"))
let chatCompletionObject = try await service.startStreamedChat(parameters: parameters)
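The call returns a stream of chunks, so the reply is assembled incrementally. Here is a minimal consumption sketch, assuming each chunk exposes its incremental text via choices[].delta.content as in the library's streaming examples:
var reply = ""
for try await chunk in chatCompletionObject {
  // Append each incremental delta to build the full reply.
  reply += chunk.choices.first?.delta.content ?? ""
}
print(reply)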
Resources: