Releases: jamesrochabrun/SwiftOpenAI

SwiftOpenAI v3.8.0

12 Sep 18:37

Adding o1 models to the Models list.

⚠️ Important: see the screenshot attached to this release for details.
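
A minimal usage sketch: the o1 models can also be reached through the existing .custom model case if you prefer not to depend on the exact enum case names added in this release. Note that only a user message is sent here (the o1 beta does not accept system messages):

import SwiftOpenAI

// Minimal sketch: call one of the newly listed o1 models.
// `.custom` accepts any raw model identifier, so it works even if the
// dedicated enum case name differs from what you expect.
let service = OpenAIServiceFactory.service(apiKey: "YOUR_API_KEY")
let userMessage = ChatCompletionParameters.Message(
   role: .user,
   content: .text("Explain the chain rule step by step."))
let parameters = ChatCompletionParameters(
   messages: [userMessage],
   model: .custom("o1-preview"))
let chat = try await service.startChat(parameters: parameters)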

Other:

Fixes #84

SwiftOpenAI v3.7.0

26 Aug 06:34

Fixing version issues.

What's Changed

Full Changelog: v3.6.2...v3.7

SwiftOpenAI v3.4.0

12 Aug 23:16

Structured Outputs

Documentation:

Must-knows:

  • All fields must be required. To use Structured Outputs, all fields or function parameters must be specified as required.

  • Although all fields must be required (and the model will return a value for each parameter), it is possible to emulate an optional parameter by using a union type with null (see the JSON sketch after this list).

  • Objects have limitations on nesting depth and size. A schema may have up to 100 object properties total, with up to 5 levels of nesting.

  • additionalProperties: false must always be set in objects.
    additionalProperties controls whether an object is allowed to contain additional keys/values that were not defined in the JSON Schema.
    Structured Outputs only supports generating specified keys/values, so developers are required to set additionalProperties: false to opt into Structured Outputs.

  • Key ordering: when using Structured Outputs, outputs will be produced in the same order as the keys in the schema.

  • Recursive schemas are supported
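
For example, an "optional" field is still listed under required but declared as a union with null, and every object sets additionalProperties to false. A minimal JSON Schema sketch with illustrative field names:

{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "nickname": { "type": ["string", "null"] }
  },
  "required": ["name", "nickname"],
  "additionalProperties": false
}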

How to use Structured Outputs in SwiftOpenAI

  1. Function calling: Structured Outputs via tools is available by setting strict: true within your function definition. This feature works with all models that support tools, including gpt-4-0613 and gpt-3.5-turbo-0613 and later. When Structured Outputs are enabled, model outputs will match the supplied tool definition.

Using this schema:

{
  "schema": {
    "type": "object",
    "properties": {
      "steps": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "explanation": {
              "type": "string"
            },
            "output": {
              "type": "string"
            }
          },
          "required": ["explanation", "output"],
          "additionalProperties": false
        }
      },
      "final_answer": {
        "type": "string"
      }
    },
    "required": ["steps", "final_answer"],
    "additionalProperties": false
  }
}

You can use the convenient JSONSchema object like this:

// 1: Define the Step schema object

let stepSchema = JSONSchema(
   type: .object,
   properties: [
      "explanation": JSONSchema(type: .string),
      "output": JSONSchema(
         type: .string)
   ],
   required: ["explanation", "output"],
   additionalProperties: false
)

// 2. Define the steps Array schema.

let stepsArraySchema = JSONSchema(type: .array, items: stepSchema)

// 3. Define the final Answer schema.

let finalAnswerSchema = JSONSchema(type: .string)

// 4. Define the math response JSON schema.

let mathResponseSchema = JSONSchema(
      type: .object,
      properties: [
         "steps": stepsArraySchema,
         "final_answer": finalAnswerSchema
      ],
      required: ["steps", "final_answer"],
      additionalProperties: false
)

let tool = ChatCompletionParameters.Tool(
   function: .init(
      name: "math_response",
      strict: true,
      parameters: mathResponseSchema))

let prompt = "solve 8x + 31 = 2"
let systemMessage = ChatCompletionParameters.Message(role: .system, content: .text("You are a math tutor"))
let userMessage = ChatCompletionParameters.Message(role: .user, content: .text(prompt))
let parameters = ChatCompletionParameters(
   messages: [systemMessage, userMessage],
   model: .gpt4o20240806,
   tools: [tool])

let chat = try await service.startChat(parameters: parameters)
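
The tool call returns its arguments as a JSON string shaped like the schema above, so you can decode it with a plain Codable model. A minimal sketch; the property path used to reach the arguments string is an assumption, so adjust it to the library's response model:

import Foundation

// Codable mirror of the math_response schema defined above.
struct MathResponse: Decodable {
   struct Step: Decodable {
      let explanation: String
      let output: String
   }
   let steps: [Step]
   let finalAnswer: String

   enum CodingKeys: String, CodingKey {
      case steps
      case finalAnswer = "final_answer"
   }
}

// Assumption: the first tool call carries the arguments as a JSON string.
if let arguments = chat.choices.first?.message.toolCalls?.first?.function.arguments,
   let data = arguments.data(using: .utf8) {
   let mathResponse = try JSONDecoder().decode(MathResponse.self, from: data)
   print(mathResponse.finalAnswer)
}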
  2. Response format: developers can now supply a JSON Schema via json_schema, a new option for the response_format parameter. This is useful when the model is not calling a tool, but rather responding to the user in a structured way. This feature works with the newest GPT-4o models: gpt-4o-2024-08-06 and gpt-4o-mini-2024-07-18. When a response_format is supplied with strict: true, model outputs will match the supplied schema.

Using the previous schema, this is how you can implement it as a JSON schema using the convenient JSONSchemaResponseFormat object:

// 1: Define the Step schema object

let stepSchema = JSONSchema(
   type: .object,
   properties: [
      "explanation": JSONSchema(type: .string),
      "output": JSONSchema(
         type: .string)
   ],
   required: ["explanation", "output"],
   additionalProperties: false
)

// 2. Define the steps Array schema.

let stepsArraySchema = JSONSchema(type: .array, items: stepSchema)

// 3. Define the final Answer schema.

let finalAnswerSchema = JSONSchema(type: .string)

// 4. Define the response format JSON schema.

let responseFormatSchema = JSONSchemaResponseFormat(
   name: "math_response",
   strict: true,
   schema: JSONSchema(
      type: .object,
      properties: [
         "steps": stepsArraySchema,
         "final_answer": finalAnswerSchema
      ],
      required: ["steps", "final_answer"],
      additionalProperties: false
   )
)

let prompt = "solve 8x + 31 = 2"
let systemMessage = ChatCompletionParameters.Message(role: .system, content: .text("You are a math tutor"))
let userMessage = ChatCompletionParameters.Message(role: .user, content: .text(prompt))
let parameters = ChatCompletionParameters(
   messages: [systemMessage, userMessage],
   model: .gpt4o20240806,
   responseFormat: .jsonSchema(responseFormatSchema))
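
As with the tools example, you then start the chat; the assistant message content comes back as a JSON string conforming to math_response. A minimal sketch (the content property path is an assumption, so adjust it to the library's response model):

let chat = try await service.startChat(parameters: parameters)

// Assumption: the structured reply arrives as a JSON string in the
// assistant message content.
if let json = chat.choices.first?.message.content {
   print(json)
}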

SwiftOpenAI Structured Outputs supports:

  • Tools structured output.
  • Response format structured output.
  • Recursive schemas.
  • Optional values in schemas.

We don't support Pydantic models; users need to manually create schemas using the JSONSchema or JSONSchemaResponseFormat objects.

Pro tip 🔥 Use the iosAICodeAssistant GPT to construct SwiftOpenAI schemas. Just paste your JSON schema and ask the GPT to create SwiftOpenAI schemas for tools and response format.

For more details visit the Demo project for tools and response format.

What's Changed Summary:

Full Changelog: v3.6.2...v3.4.0

SwiftOpenAI v3.6.2

03 Aug 07:15
e4b6405

What's Changed

Full Changelog: v3.6.1...v3.6.2

AIProxy bug fix

22 Jul 01:47
392d01e

If the developer does not provide an AIProxy serviceURL in their integration, we want to fall back to api.aiproxy.pro and not http://Lous-MacBook-Air-3.local:4000 😅
#64

Making debug prints optional.

20 Jul 04:15

Now consumers of the library can opt in to print events on DEBUG builds.

e.g:

let service = OpenAIServiceFactory.service(apiKey: YOUR_API_KEY, debugEnabled: true)

Support for GPT-4o mini: advancing cost-efficient intelligence

19 Jul 20:02
e6bb9a3

GPT-4o mini: advancing cost-efficient intelligence
Introducing our most cost-efficient small model


AIProxy updates.

26 Jun 22:04
6bc5fe0
  • The factory method OpenAIServiceFactory.ollama has been changed to OpenAIServiceFactory.service, where you specify the URL of an OpenAI API-compatible service. To specify the URL and API key (for Bearer authentication), use:
OpenAIServiceFactory.service(apiKey: "YOUR_API_KEY", baseURL: "http://<DOMAIN>:<PORT>")
  • The AIProxy integration now uses certificate pinning to prevent threat actors from snooping on your traffic. No changes to your client code are necessary to take advantage of this security improvement.

  • The AIProxy integration has changed the method for hiding the DeviceCheck bypass token. This token is only intended to be used on iOS simulators, and the previous hiding method was too easy to leak into production builds of the app. Please change your integration code from:

#if DEBUG && targetEnvironment(simulator)
	OpenAIServiceFactory.service(
		aiproxyPartialKey: "hardcode-partial-key-here",
		aiproxyDeviceCheckBypass: "hardcode-device-check-bypass-here"
	)
#else
	OpenAIServiceFactory.service(aiproxyPartialKey: "hardcode-partial-key-here")
#endif

To this:

OpenAIServiceFactory.service(
   aiproxyPartialKey: "hardcode-partial-key-here"
)

And use the method described in the README for adding the bypass token as an env variable to your Xcode project.

Ollama OpenAI compatibility.

25 Jun 07:17

Ollama

Ollama now has built-in compatibility with the OpenAI Chat Completions API, making it possible to use more tooling and applications with Ollama locally.


⚠️ Important

Remember that these models run locally, so you need to download them. If you want to use llama3, you can open the terminal and run the following command:

ollama pull llama3

You can follow the Ollama documentation for more details.

How to use these models locally with SwiftOpenAI

To use local models with an OpenAIService in your application, you need to provide a URL.

let service = OpenAIServiceFactory.ollama(baseURL: "http://localhost:11434")

Then you can use the completions API as follows:

let prompt = "Tell me a joke"
let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .custom("llama3"))
let chatCompletionObject = try await service.startStreamedChat(parameters: parameters)
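
To consume the stream, iterate it with for try await. A minimal sketch, assuming each chunk follows the OpenAI-style choices[].delta.content shape (the exact property names are an assumption):

// Assumption: chunks mirror the OpenAI chat-completions chunk format,
// with incremental text under choices[].delta.content.
var reply = ""
for try await chunk in chatCompletionObject {
   if let content = chunk.choices.first?.delta.content {
      reply += content
      print(content, terminator: "")
   }
}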

Resources:

Changelog Jun 6th, 2024

21 Jun 05:42