**README.md** (+4 −4)
````diff
@@ -878,7 +878,7 @@ public struct ChatCompletionObject: Decodable {
 Usage
 ```swift
 let prompt = "Tell me a joke"
-let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .gpt41106Preview)
+let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .gpt4o)
 let chatCompletionObject = service.startChat(parameters: parameters)
 ```
 
````
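For quick reference, here is a minimal sketch of reading the reply out of the `chatCompletionObject` returned above. The `choices` / `message` / `content` field names are assumed from OpenAI's standard response shape and are not shown in this hunk; check the `ChatCompletionObject` definition earlier in the README.

```swift
// Hedged sketch: extract the assistant's reply from the completion object.
// Field names (`choices`, `message`, `content`) are assumptions based on the
// OpenAI response schema; verify against the struct defined in the README.
if let reply = chatCompletionObject.choices.first?.message.content {
    print(reply) // e.g. the joke returned by the model
}
```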
````diff
@@ -966,7 +966,7 @@ public struct ChatCompletionChunkObject: Decodable {
 Usage
 ```swift
 let prompt = "Tell me a joke"
-let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .gpt41106Preview)
+let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .gpt4o)
 let chatCompletionObject = try await service.startStreamedChat(parameters: parameters)
 ```
 
````
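This streamed variant assigns the whole stream to `chatCompletionObject`. As a hedged sketch of how it might be consumed, assuming `startStreamedChat` returns an `AsyncSequence` of `ChatCompletionChunkObject` values whose first choice carries a `delta.content` fragment (mirroring OpenAI's streaming payload; the field names are assumptions):

```swift
// Hedged sketch: accumulate the streamed fragments into one string.
// Assumes each chunk exposes `choices.first?.delta.content`; verify against
// the ChatCompletionChunkObject struct shown in this section of the README.
var fullText = ""
for try await chunk in chatCompletionObject {
    if let fragment = chunk.choices.first?.delta.content {
        fullText += fragment
    }
}
print(fullText) // the assembled reply
```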
````diff
@@ -1042,14 +1042,14 @@ For more details about how to also uploading base 64 encoded images in iOS check
 
 ### Vision
 
-[Vision](https://platform.openai.com/docs/guides/vision) API is available for use; developers must access it through the chat completions API, specifically using the gpt-4-vision-preview model. Using any other model will not provide an image description
+[Vision](https://platform.openai.com/docs/guides/vision) API is available for use; developers must access it through the chat completions API, specifically using the gpt-4-vision-preview model or gpt-4o model. Using any other model will not provide an image description
 
 Usage
 ```swift
 let imageURL = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
 let prompt = "What is this?"
 let messageContent: [ChatCompletionParameters.Message.ContentType.MessageContent] = [.text(prompt), .imageUrl(imageURL)] // Users can add as many `.imageUrl` instances to the service.
-let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .contentArray(messageContent))], model: .gpt4VisionPreview)
+let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .contentArray(messageContent))], model: .gpt4o)
 let chatCompletionObject = try await service.startStreamedChat(parameters: parameters)
 ```
````
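The inline comment in this hunk notes that several `.imageUrl` values can be attached to a single message. A small sketch of what that might look like, reusing the snippet above; the second URL is a placeholder added for illustration:

```swift
// Hedged sketch: the same Vision request with more than one image attached.
// `secondImageURL` is a placeholder; everything else follows the snippet above.
let secondImageURL = "https://example.com/second-image.jpg"
let multiImageContent: [ChatCompletionParameters.Message.ContentType.MessageContent] = [
    .text("Compare these two images."),
    .imageUrl(imageURL),
    .imageUrl(secondImageURL)
]
let multiImageParameters = ChatCompletionParameters(
    messages: [.init(role: .user, content: .contentArray(multiImageContent))],
    model: .gpt4o)
```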
**Sources/OpenAI/Public/Parameters/Model.swift** (+15)
````diff
@@ -12,6 +12,19 @@ import Foundation
 public enum Model {
 
    /// Chat completion
+
+   /// ### Omicron model
+   /// As of 2024-05-13, this is the latest and greatest from OpenAI.
+   /// From their [docs](https://platform.openai.com/docs/models/gpt-4o):
+   ///
+   /// > GPT-4o (“o” for “omni”) is our most advanced model. It is multimodal (accepting text or image inputs
+   /// > and outputting text), and it has the same high intelligence as GPT-4 Turbo but is much more efficient—
+   /// > it generates text 2x faster and is 50% cheaper. Additionally, GPT-4o has the best vision and performance
+   /// > across non-English languages of any of our models
+   ///
+   case gpt4o // Points to gpt-4o-2024-05-13
+   case gpt4o20240513 // 128k context window with training data up to Oct 2023
+
    case gpt35Turbo
    case gpt35Turbo1106 // Most updated - Supports parallel function calls
    /// The latest GPT-3.5 Turbo model with higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls. Returns a maximum of 4,096 output tokens. [Learn more](https://openai.com/blog/new-embedding-models-and-api-updates#:~:text=Other%20new%20models%20and%20lower%20pricing).
````
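As a usage note, here is a hedged sketch of choosing between the two new cases with the `ChatCompletionParameters` initializer shown in the README hunks above; the prompt is a placeholder:

```swift
// `.gpt4o` is the floating alias (currently resolving to gpt-4o-2024-05-13 per the
// comment above), while `.gpt4o20240513` pins that exact snapshot for reproducibility.
let latest = ChatCompletionParameters(
    messages: [.init(role: .user, content: .text("Hello"))],
    model: .gpt4o)
let pinned = ChatCompletionParameters(
    messages: [.init(role: .user, content: .text("Hello"))],
    model: .gpt4o20240513)
```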