
Releases: jamesrochabrun/SwiftOpenAI

SwiftOpenAI v4.0.7

14 Apr 06:36
3e81448

What's Changed

Full Changelog: v4.0.6...v4.0.7

SwiftOpenAI v4.0.6

17 Mar 07:08
3f5e195

Adds a convenience property to ResponseModel, addressing #129:

   /// Convenience property that aggregates all text output from output_text items in the output array.
   /// Similar to the outputText property in Python and JavaScript SDKs.
   public var outputText: String? {
      let outputTextItems = output.compactMap { outputItem -> String? in
         switch outputItem {
         case .message(let message):
            return message.content.compactMap { contentItem -> String? in
               switch contentItem {
               case .outputText(let outputText):
                  return outputText.text
               }
            }.joined()
         default:
            return nil
         }
      }
      
      return outputTextItems.isEmpty ? nil : outputTextItems.joined()
   }
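
With this in place, the aggregated text can be read straight off a response. A minimal usage sketch, assuming a service and a ModelResponseParameter configured as in the v4.0.5 Response API examples below:

let response = try await service.responseCreate(parameters)
if let text = response.outputText {
   print(text)
}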

SwiftOpenAI v4.0.5

16 Mar 07:24
0a2a810

Support for the non-streaming Response API

Response

OpenAI's most advanced interface for generating model responses. Supports text and image inputs, and text outputs. Create stateful interactions with the model, using the output of previous responses as input. Extend the model's capabilities with built-in tools for file search, web search, computer use, and more. Allow the model access to external systems and data using function calling.


Parameters

/// [Creates a model response.](https://platform.openai.com/docs/api-reference/responses/create)
public struct ModelResponseParameter: Codable {

   /// Text, image, or file inputs to the model, used to generate a response.
   /// A text input to the model, equivalent to a text input with the user role.
   /// A list of one or many input items to the model, containing different content types.
   public var input: InputType

   /// Model ID used to generate the response, like gpt-4o or o1. OpenAI offers a wide range of models with
   /// different capabilities, performance characteristics, and price points.
   /// Refer to the model guide to browse and compare available models.
   public var model: String

   /// Specify additional output data to include in the model response. Currently supported values are:
   /// file_search_call.results : Include the search results of the file search tool call.
   /// message.input_image.image_url : Include image urls from the input message.
   /// computer_call_output.output.image_url : Include image urls from the computer call output.
   public var include: [String]?

   /// Inserts a system (or developer) message as the first item in the model's context.
   /// When used along with previous_response_id, the instructions from a previous response will not be
   /// carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
   public var instructions: String?

   /// An upper bound for the number of tokens that can be generated for a response, including visible output tokens
   /// and reasoning tokens.
   public var maxOutputTokens: Int?

   /// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information
   /// about the object in a structured format, and querying for objects via API or the dashboard.
   /// Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
   public var metadata: [String: String]?

   /// Whether to allow the model to run tool calls in parallel.
   /// Defaults to true
   public var parallelToolCalls: Bool?

   /// The unique ID of the previous response to the model. Use this to create multi-turn conversations.
   /// Learn more about conversation state.
   public var previousResponseId: String?

   /// o-series models only
   /// Configuration options for reasoning models.
   public var reasoning: Reasoning?

   /// Whether to store the generated model response for later retrieval via API.
   /// Defaults to true
   public var store: Bool?

   /// If set to true, the model response data will be streamed to the client as it is generated using server-sent events.
   public var stream: Bool?

   /// What sampling temperature to use, between 0 and 2.
   /// Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
   /// We generally recommend altering this or top_p but not both.
   /// Defaults to 1
   public var temperature: Double?

   /// Configuration options for a text response from the model. Can be plain text or structured JSON data.
   public var text: TextConfiguration?

   /// How the model should select which tool (or tools) to use when generating a response.
   /// See the tools parameter to see how to specify which tools the model can call.
   public var toolChoice: ToolChoiceMode?

   /// An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.
   public var tools: [Tool]?

   /// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
   /// So 0.1 means only the tokens comprising the top 10% probability mass are considered.
   /// We generally recommend altering this or temperature but not both.
   /// Defaults to 1
   public var topP: Double?

   /// The truncation strategy to use for the model response.
   /// Defaults to disabled
   public var truncation: String?

   /// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
   public var user: String?
}

The Response object

/// The Response object returned when retrieving a model response
public struct ResponseModel: Decodable {

   /// Unix timestamp (in seconds) of when this Response was created.
   public let createdAt: Int

   /// An error object returned when the model fails to generate a Response.
   public let error: ErrorObject?

   /// Unique identifier for this Response.
   public let id: String

   /// Details about why the response is incomplete.
   public let incompleteDetails: IncompleteDetails?

   /// Inserts a system (or developer) message as the first item in the model's context.
   public let instructions: String?

   /// An upper bound for the number of tokens that can be generated for a response, including visible output tokens
   /// and reasoning tokens.
   public let maxOutputTokens: Int?

   /// Set of 16 key-value pairs that can be attached to an object.
   public let metadata: [String: String]

   /// Model ID used to generate the response, like gpt-4o or o1.
   public let model: String

   /// The object type of this resource - always set to response.
   public let object: String

   /// An array of content items generated by the model.
   public let output: [OutputItem]

   /// Whether to allow the model to run tool calls in parallel.
   public let parallelToolCalls: Bool

   /// The unique ID of the previous response to the model. Use this to create multi-turn conversations.
   public let previousResponseId: String?

   /// Configuration options for reasoning models.
   public let reasoning: Reasoning?

   /// The status of the response generation. One of completed, failed, in_progress, or incomplete.
   public let status: String

   /// What sampling temperature to use, between 0 and 2.
   public let temperature: Double?

   /// Configuration options for a text response from the model.
   public let text: TextConfiguration

   /// How the model should select which tool (or tools) to use when generating a response.
   public let toolChoice: ToolChoiceMode

   /// An array of tools the model may call while generating a response.
   public let tools: [Tool]

   /// An alternative to sampling with temperature, called nucleus sampling.
   public let topP: Double?

   /// The truncation strategy to use for the model response.
   public let truncation: String?

   /// Represents token usage details.
   public let usage: Usage?

   /// A unique identifier representing your end-user.
   public let user: String?
}

Usage

Simple text input

let prompt = "What is the capital of France?"
let parameters = ModelResponseParameter(input: .string(prompt), model: .gpt4o)
let response = try await service.responseCreate(parameters)
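
Because ResponseModel exposes an id and ModelResponseParameter accepts a previousResponseId, a follow-up turn can reuse the first response as context. A sketch (the initializer label for previousResponseId is assumed to match the property above):

let followUpParameters = ModelResponseParameter(
    input: .string("And what is its population?"),
    model: .gpt4o,
    previousResponseId: response.id
)
let followUpResponse = try await service.responseCreate(followUpParameters)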

Text input with reasoning

let prompt = "How much wood would a woodchuck chuck?"
let parameters = ModelResponseParameter(
    input: .string(prompt),
    model: .o3Mini,
    reasoning: Reasoning(effort: "high")
)
let response = try await service.responseCreate(parameters)

Image input

let textPrompt = "What is in this image?"
let imageUrl = "https://example.com/path/to/image.jpg"
let imageContent = ContentItem.imageUrl(ImageUrlContent(imageUrl: imageUrl))
let textContent = ContentItem.text(TextContent(text: textPrompt))
let message = InputItem(role: "user", content: [textContent, imageContent])
let parameters = ModelResponseParameter(input: .array([message]), model: .gpt4o)
let response = try await service.responseCreate(parameters)

Using tools (web search)

let prompt = "What was a positive news story from today?"
let parameters = ModelResponseParameter(
    input: .string(prompt),
    model: .gpt4o,
    tools: [Tool(type: "web_search_preview", function: nil)]
)
let response = try await service.responseCreate(parameters)

Using tools (file search)

let prompt = "What are the key points in the document?"
let parameters = ModelResponseParameter(
    input: .string(prompt),
    model: .gpt4o,
    tools: [
        Tool(
            type: "file_search",
            function: ChatCompletionParameters.ChatFunction(
                name: "file_search",
                strict: false,
                description: "Search through files",
                parameters: JSONSchema(
                    type: .ob...

SwiftOpenAI v4.0.4

10 Mar 22:09
75b8f09

What's Changed

Full Changelog: v4.0.3...v4.0.4

SwiftOpenAI v4.0.3

12 Feb 05:43
8bb0ffc

What's Changed

Following suggestions from APIs that handle many providers, SwiftOpenAI now makes all completion properties optional. This makes the library friendlier to different providers.

> Decodables should all have optional properties. Why? We don't want to fail decoding in live apps if the provider changes something out from under us (which can happen purposefully due to deprecations, or by accident due to regressions). If we use non-optionals in decodable definitions, then a provider removing a field, changing the type of a field, or removing an enum case would cause decoding to fail.
>
> You may think this isn't too bad, since the JSONDecoder throws anyway, and therefore client code will already be wrapped in a do/catch. However, we always want to give the best chance that decoding succeeds for the properties the client actually uses. That is, if the provider changes the enum case of a property unused by the client, we want the client application to continue functioning correctly, not to throw an error and enter the catch branch of the client's call site.
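
To illustrate the principle with a hypothetical model (not part of the library), decoding still succeeds when the provider stops sending a field, and only the unused property comes back nil:

import Foundation

// Hypothetical all-optional decodable, for illustration only.
struct CompletionChoice: Decodable {
   let text: String?
   let finishReason: String?
}

// The provider no longer sends "finishReason"; decoding still succeeds
// and only the missing property is nil.
let json = Data(#"{"text": "Hello"}"#.utf8)
let choice = try JSONDecoder().decode(CompletionChoice.self, from: json)
print(choice.text ?? "", choice.finishReason ?? "nil")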

Full Changelog: v4.0.2...v4.0.3

SwiftOpenAI v4.0.2

04 Feb 05:25
15cd189

What's Changed

Full Changelog: v4.0.1...v4.0.2

SwiftOpenAI v4.0.1

02 Feb 07:12
e589864


What's Changed

Full Changelog: v4.0.0...v4.0.1

SwiftOpenAI v4.0.0

02 Feb 07:02
d72e7a7

DeepSeek


The DeepSeek API uses an API format compatible with OpenAI. By modifying the configuration, you can use SwiftOpenAI to access the DeepSeek API.

Creating the service

let apiKey = "your_api_key"
let service = OpenAIServiceFactory.service(
   apiKey: apiKey,
   overrideBaseURL: "https://api.deepseek.com")

Non-Streaming Example

let prompt = "What is the Manhattan project?"
let parameters = ChatCompletionParameters(
    messages: [.init(role: .user, content: .text(prompt))],
    model: .custom("deepseek-reasoner")
)

do {
    let result = try await service.chat(parameters: parameters)
    
    // Access the response content
    if let content = result.choices.first?.message.content {
        print("Response: \(content)")
    }
    
    // Access reasoning content if available
    if let reasoning = result.choices.first?.message.reasoningContent {
        print("Reasoning: \(reasoning)")
    }
} catch {
    print("Error: \(error)")
}

Streaming Example

let prompt = "What is the Manhattan project?"
let parameters = ChatCompletionParameters(
    messages: [.init(role: .user, content: .text(prompt))],
    model: .custom("deepseek-reasoner")
)

// Start the stream
do {
    let stream = try await service.startStreamedChat(parameters: parameters)
    for try await result in stream {
        let content = result.choices.first?.delta.content ?? ""
        self.message += content
        
        // Optional: Handle reasoning content if available
        if let reasoning = result.choices.first?.delta.reasoningContent {
            self.reasoningMessage += reasoning
        }
    }
} catch APIError.responseUnsuccessful(let description, let statusCode) {
    self.errorMessage = "Network error with status code: \(statusCode) and description: \(description)"
} catch {
    self.errorMessage = error.localizedDescription
}

Notes

  • The DeepSeek API is compatible with OpenAI's format but uses different model names
  • Use .custom("deepseek-reasoner") to specify the DeepSeek model
  • The reasoningContent field is optional and specific to DeepSeek's API
  • Error handling follows the same pattern as standard OpenAI requests.

For more information about the DeepSeek API, visit its documentation.

SwiftOpenAI v3.9.9

02 Feb 06:31
c581d02

OpenRouter


OpenRouter provides an OpenAI-compatible completion API to 314 models & providers that you can call directly, or using the OpenAI SDK. Additionally, some third-party SDKs are available.

// Creating the service

let apiKey = "your_api_key"
let service = OpenAIServiceFactory.service(
   apiKey: apiKey,
   overrideBaseURL: "https://openrouter.ai",
   proxyPath: "api",
   extraHeaders: [
      "HTTP-Referer": "<YOUR_SITE_URL>", // Optional. Site URL for rankings on openrouter.ai.
      "X-Title": "<YOUR_SITE_NAME>"      // Optional. Site title for rankings on openrouter.ai.
   ])

// Making a request

let prompt = "What is the Manhattan project?"
let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .custom("deepseek/deepseek-r1:free"))
let stream = try await service.startStreamedChat(parameters: parameters)
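
Consuming the stream works the same way as in the DeepSeek streaming example in the v4.0.0 release above. A minimal sketch:

// Consume the streamed deltas as they arrive.
var message = ""
for try await result in stream {
    message += result.choices.first?.delta.content ?? ""
}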

For more information about the OpenRouter API, visit its documentation.

DeepSeek


The DeepSeek API uses an API format compatible with OpenAI. By modifying the configuration, you can use SwiftOpenAI to access the DeepSeek API.

// Creating the service

let apiKey = "your_api_key"
let service = OpenAIServiceFactory.service(
   apiKey: apiKey,
   overrideBaseURL: "https://api.deepseek.com")

// Making a request

let prompt = "What is the Manhattan project?"
let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .custom("deepseek-reasoner"))
let stream = try await service.startStreamedChat(parameters: parameters)

For more information about the DeepSeek API, visit its documentation.

SwiftOpenAI v3.9.8

23 Jan 08:23
6f1a8dd

What's Changed

Full Changelog: v.3.9.6...v3.9.8