Releases: jamesrochabrun/SwiftOpenAI
Function Calling Required
Assistants API V2
https://platform.openai.com/docs/assistants/whats-new

Migrated SwiftOpenAI to Assistants V2. If you need V1 support, make sure to use v2.3.
Check the OpenAI migration guide.
We have changed the way that tools and files work in the Assistants API between the v1 and v2 versions of the beta. Both versions of the beta continue to be accessible via the API today, but we recommend migrating to the newest version of our APIs as soon as feasible. We will deprecate v1 of the beta by the end of 2024.
If you do not use tools or files with the Assistants API today, there should be no changes required for you to migrate from the v1 version to the v2 version of the beta. Simply pass the v2 beta version header and/or move to the latest version of our Node and Python SDKs!
What has changed
The v2 version of the Assistants API contains the following changes:
Tool rename: The retrieval tool has been renamed to the file_search tool
Files belong to tools: Files are now associated with tools instead of Assistants and Messages. This means that:
AssistantFile and MessageFile objects no longer exist.
Instead of AssistantFile and MessageFile, files are attached to Assistants and Threads using the new tool_resources object.
The tool_resources for the code interpreter tool are a list of file_ids.
The tool_resources for the file_search tool are a new object called a vector_store.
Messages now have an attachments parameter rather than a file_ids parameter. Message attachments are helpers that add the files to a Thread’s tool_resources.

Threads can bring their own tool_resources into a conversation.

All v1 endpoints and objects for the Assistants API can be found under the Legacy section of the API reference.
- Support for batches.
- Support for vector stores.
- Support for vector store files.
- Support for vector store file batches.
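For illustration, the v2 tool_resources shape can be modeled with local Codable types. The struct names and sample IDs below are hypothetical stand-ins; SwiftOpenAI ships its own decoding types:

```swift
import Foundation

// Hypothetical local models mirroring the v2 `tool_resources` JSON shape.
struct ToolResources: Codable {
  struct CodeInterpreter: Codable {
    let fileIDs: [String]
    enum CodingKeys: String, CodingKey { case fileIDs = "file_ids" }
  }
  struct FileSearch: Codable {
    let vectorStoreIDs: [String]
    enum CodingKeys: String, CodingKey { case vectorStoreIDs = "vector_store_ids" }
  }
  let codeInterpreter: CodeInterpreter?
  let fileSearch: FileSearch?
  enum CodingKeys: String, CodingKey {
    case codeInterpreter = "code_interpreter"
    case fileSearch = "file_search"
  }
}

// Sample payload: code_interpreter takes file IDs, file_search takes vector store IDs.
let json = """
{
  "code_interpreter": { "file_ids": ["file-abc"] },
  "file_search": { "vector_store_ids": ["vs_123"] }
}
""".data(using: .utf8)!

let resources = try! JSONDecoder().decode(ToolResources.self, from: json)
```

The key point of the new shape: file IDs hang off the code_interpreter entry, while file_search references vector store IDs rather than files directly.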
New interfaces:
// MARK: Batch
/// Creates and executes a batch from an uploaded file of requests
///
/// - Parameter parameters: The parameters needed to create a batch.
/// - Returns: A [batch](https://platform.openai.com/docs/api-reference/batch/object) object.
/// - Throws: An error if the request fails
///
/// For more information, refer to [OpenAI's Batch API documentation](https://platform.openai.com/docs/api-reference/batch/create).
func createBatch(
  parameters: BatchParameter)
  async throws -> BatchObject
/// Retrieves a batch.
///
/// - Parameter id: The identifier of the batch to retrieve.
/// - Returns: A [BatchObject](https://platform.openai.com/docs/api-reference/batch/object) matching the specified ID.
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Batch documentation](https://platform.openai.com/docs/api-reference/batch/retrieve).
func retrieveBatch(
  id: String)
  async throws -> BatchObject
/// Cancels an in-progress batch.
///
/// - Parameter id: The identifier of the batch to cancel.
/// - Returns: A [BatchObject](https://platform.openai.com/docs/api-reference/batch/object) matching the specified ID.
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Batch documentation](https://platform.openai.com/docs/api-reference/batch/cancel).
func cancelBatch(
  id: String)
  async throws -> BatchObject
/// List your organization's batches.
///
/// - Parameters:
/// - after: A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.
/// - limit: A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
/// - Returns: An `OpenAIResponse<BatchObject>` containing a list of paginated [Batch](https://platform.openai.com/docs/api-reference/batch/object) objects.
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Batch API documentation](https://platform.openai.com/docs/api-reference/batch/list).
func listBatch(
  after: String?,
  limit: Int?)
  async throws -> OpenAIResponse<BatchObject>
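The after cursor described above enables a simple fetch-all loop. The snippet below simulates the control flow with a mock endpoint so it is self-contained; with the real API the call would be listBatch(after:limit:) and the response field names may differ:

```swift
import Foundation

// A minimal stand-in for a paginated list response.
struct Page {
  let data: [String]
  let lastID: String?
  let hasMore: Bool
}

// Mock "list batches" endpoint paging through five IDs, `limit` at a time.
let allIDs = ["batch_1", "batch_2", "batch_3", "batch_4", "batch_5"]
func listBatchesMock(after: String?, limit: Int) -> Page {
  var start = 0
  if let after = after, let idx = allIDs.firstIndex(of: after) { start = idx + 1 }
  let slice = Array(allIDs[start..<min(start + limit, allIDs.count)])
  return Page(data: slice, lastID: slice.last, hasMore: start + limit < allIDs.count)
}

// Walk the pages, passing each page's last ID as the next `after` cursor.
var cursor: String? = nil
var collected: [String] = []
while true {
  let page = listBatchesMock(after: cursor, limit: 2)
  collected += page.data
  guard page.hasMore else { break }
  cursor = page.lastID
}
```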
// MARK: Vector Store
/// Create a vector store.
///
/// - Parameter parameters: The parameters needed to create a vector store.
/// - Returns: A [Vector store](https://platform.openai.com/docs/api-reference/vector-stores) object.
/// - Throws: An error if the request fails
///
/// For more information, refer to [OpenAI's Vector store API documentation](https://platform.openai.com/docs/api-reference/vector-stores/create).
func createVectorStore(
  parameters: VectorStoreParameter)
  async throws -> VectorStoreObject
/// Returns a list of vector stores.
///
/// - Parameter limit: A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
/// - Parameter order: Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.
/// - Parameter after: A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.
/// - Parameter before: A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.
/// - Returns: A list of [VectorStoreObject](https://platform.openai.com/docs/api-reference/vector-stores) objects.
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Vector stores API documentation](https://platform.openai.com/docs/api-reference/vector-stores/list).
func listVectorStores(
  limit: Int?,
  order: String?,
  after: String?,
  before: String?)
  async throws -> OpenAIResponse<VectorStoreObject>
/// Retrieves a vector store.
///
/// - Parameter id: The ID of the vector store to retrieve.
/// - Returns: A [Vector Store](https://platform.openai.com/docs/api-reference/vector-stores) matching the specified ID.
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Vector store documentation](https://platform.openai.com/docs/api-reference/vector-stores/retrieve).
func retrieveVectorStore(
  id: String)
  async throws -> VectorStoreObject
/// Modifies a vector store.
///
/// - Parameter id: The ID of the vector store to modify.
/// - Returns: A [Vector Store](https://platform.openai.com/docs/api-reference/vector-stores) matching the specified ID.
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Vector store documentation](https://platform.openai.com/docs/api-reference/vector-stores/modify).
func modifyVectorStore(
  id: String)
  async throws -> VectorStoreObject
/// Delete a vector store.
///
/// - Parameter id: The ID of the vector store to delete.
/// - Returns: A Deletion status.
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Vector store documentation](https://platform.openai.com/docs/api-reference/vector-stores/delete).
func deleteVectorStore(
  id: String)
  async throws -> DeletionStatus
// MARK: Vector Store Files
/// Create a vector store file by attaching a [File](https://platform.openai.com/docs/api-reference/files) to a vector store.
///
/// - Parameter vectorStoreID: The ID of the vector store for which to create a File.
/// - Parameter parameters: The parameters needed to create a vector store File.
/// - Returns: A [VectorStoreFileObject](https://platform.openai.com/docs/api-reference/vector-stores-files/file-object)
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Vector store file documentation](https://platform.openai.com/docs/api-reference/vector-stores-files/createFile).
func createVectorStoreFile(
  vectorStoreID: String,
  parameters: VectorStoreFileParameter)
  async throws -> VectorStoreFileObject
/// Returns a list of vector ...
Bug Fixes
- Assistants API Response format fix.
Updating the Swift package with the latest OpenAI updates.
Apr 9th, 2024
Released GPT-4 Turbo with Vision in general availability in the API
Apr 4th, 2024
Added support for seed in the fine-tuning API
Added support for checkpoints in the fine-tuning API
Added support for adding Messages when creating a Run in the Assistants API
Apr 1st, 2024
Added support for filtering Messages by run_id in the Assistants API
Mar 29th, 2024
Added support for temperature and assistant message creation in the Assistants API
Third Party Library AI Proxy
The AI Proxy team made this contribution independently. SwiftOpenAI's owner is not involved in its development but accepts it in the spirit of open-source collaboration. It is added for convenience, and its use is at the discretion of the developer.
Protect your OpenAI key without a backend.
What is it?
AIProxy is a backend for AI apps that proxies requests from your app to OpenAI. You can use this service to avoid exposing your OpenAI key in your app. We offer AIProxy support so that developers can build and distribute apps using SwiftOpenAI.
How does my SwiftOpenAI code change?
SwiftOpenAI supports proxying requests through AIProxy with a small change to your integration code.
Instead of initializing service with:
let apiKey = "your_openai_api_key_here"
let service = OpenAIServiceFactory.service(apiKey: apiKey)
Use:
#if DEBUG && targetEnvironment(simulator)
let service = OpenAIServiceFactory.service(
  aiproxyPartialKey: "hardcode_partial_key_here",
  aiproxyDeviceCheckBypass: "hardcode_device_check_bypass_here"
)
#else
let service = OpenAIServiceFactory.service(
  aiproxyPartialKey: "hardcode_partial_key_here"
)
#endif
The aiproxyPartialKey and aiproxyDeviceCheckBypass values are provided to you on the AIProxy developer dashboard.
What is the aiproxyDeviceCheckBypass constant?
AIProxy uses Apple's DeviceCheck to ensure that requests received by the backend originated from your app on a legitimate Apple device. However, the iOS simulator cannot produce DeviceCheck tokens. Rather than requiring you to constantly build and run on device during development, AIProxy provides a way to skip the DeviceCheck integrity check. The token is intended for use by developers only. If an attacker gets the token, they can make requests to your AIProxy project without including a DeviceCheck token, and thus remove one level of protection.
What is the aiproxyPartialKey constant?
This constant is intended to be included in the distributed version of your app. As the name implies, it is a partial representation of your OpenAI key. Specifically, it is one half of an encrypted version of your key. The other half resides on AIProxy's backend. As your app makes requests to AIProxy, the two encrypted parts are paired, decrypted, and used to fulfill the request to OpenAI.
How to setup my project on AIProxy?
Please see the AIProxy integration guide
Contributors of SwiftOpenAI shall not be liable for any damages or losses caused by third parties. Contributors of this library provide third-party integrations as a convenience. Any use of a third party's services is at your own risk.
Assistant API Stream
Assistants API stream support.
You can stream events from the Create Thread and Run, Create Run, and Submit Tool Outputs endpoints by passing "stream": true. The response will be a Server-Sent Events stream.
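Each event in that stream arrives as an `event:` line followed by a `data:` line. A minimal, self-contained sketch of splitting one such chunk (the payload below is a made-up example):

```swift
import Foundation

// One raw Server-Sent Events chunk, in the shape the streaming endpoints emit.
let chunk = """
event: thread.message.delta
data: {"id":"msg_123","object":"thread.message.delta"}
"""

// Pull the event name and the JSON payload out of the two field lines.
var eventName: String?
var dataJSON: String?
for line in chunk.split(separator: "\n") {
  if line.hasPrefix("event: ") { eventName = String(line.dropFirst("event: ".count)) }
  if line.hasPrefix("data: ") { dataJSON = String(line.dropFirst("data: ".count)) }
}
```

In practice the library does this parsing for you and surfaces typed AssistantStreamEvent values, as the interfaces below show.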
In Swift:
/// Creates a thread and run with stream enabled.
///
/// - Parameter parameters: The parameters needed to create a thread and run.
/// - Returns: An AsyncThrowingStream of [AssistantStreamEvent](https://platform.openai.com/docs/api-reference/assistants-streaming/events) objects.
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Run API documentation](https://platform.openai.com/docs/api-reference/runs/createThreadAndRun).
func createThreadAndRunStream(
  parameters: CreateThreadAndRunParameter)
  async throws -> AsyncThrowingStream<AssistantStreamEvent, Error>
/// Create a run with stream enabled.
///
/// - Parameter threadID: The ID of the thread to run.
/// - Parameter parameters: The parameters needed to build a Run.
/// - Returns: An AsyncThrowingStream of [AssistantStreamEvent](https://platform.openai.com/docs/api-reference/assistants-streaming/events) objects.
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Run API documentation](https://platform.openai.com/docs/api-reference/runs/createRun).
func createRunStream(
  threadID: String,
  parameters: RunParameter)
  async throws -> AsyncThrowingStream<AssistantStreamEvent, Error>
/// When a run has the status "requires_action" and required_action.type is submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request. This variant has streaming enabled.
///
/// - Parameter threadID: The ID of the [thread](https://platform.openai.com/docs/api-reference/threads) to which this run belongs.
/// - Parameter runID: The ID of the run that requires the tool output submission.
/// - Parameter parameters: The parameters needed for the run tools output.
/// - Returns: An AsyncThrowingStream of [AssistantStreamEvent](https://platform.openai.com/docs/api-reference/assistants-streaming/events) objects.
/// - Throws: An error if the request fails.
///
/// For more information, refer to [OpenAI's Run API documentation](https://platform.openai.com/docs/api-reference/runs/submitToolOutputs).
func submitToolOutputsToRunStream(
  threadID: String,
  runID: String,
  parameters: RunToolsOutputParameter)
  async throws -> AsyncThrowingStream<AssistantStreamEvent, Error>
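Consuming any of these streams is a `for try await` loop. The sketch below uses a mock AsyncThrowingStream of event-name strings in place of the library's AssistantStreamEvent values so it runs standalone:

```swift
import Foundation

// Mock stream standing in for the one returned by createRunStream and friends.
let stream = AsyncThrowingStream<String, Error> { continuation in
  continuation.yield("thread.run.created")
  continuation.yield("thread.message.delta")
  continuation.yield("thread.run.completed")
  continuation.finish()
}

// Drain the stream; a semaphore bridges the async work back to this script.
var received: [String] = []
let semaphore = DispatchSemaphore(value: 0)
Task {
  for try await event in stream {
    received.append(event) // real code would switch on the event type here
  }
  semaphore.signal()
}
semaphore.wait()
```

With the real API you would typically switch on each AssistantStreamEvent to extract message deltas or detect run completion.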
Added a demo project/tutorial based on the Python tutorial.
Adding latest changes from OpenAI API https://platform.openai.com/docs/changelog
Feb 9th, 2024
Added timestamp_granularities parameter to the Audio API
Feb 1st, 2024
Released gpt-3.5-turbo-0125, an updated GPT-3.5 Turbo model
Jan 25th, 2024
Released embedding V3 models and an updated GPT-4 Turbo preview
Added dimensions parameter to the Embeddings API
Dec 20th, 2023
Added additional_instructions parameter to run creation in the Assistants API
Dec 15th, 2023
Added logprobs and top_logprobs parameters to the Chat Completions API
Dec 14th, 2023
Changed function parameters argument on a tool call to be optional.
Azure OpenAI
v1.5
Log probs are now available with chat completions.
- Adding support for log probs in chat completion API https://platform.openai.com/docs/api-reference/chat/create#chat-create-logprobs
- Updated Chat demos with new log probs parameters.

