- [feature] Added support for configuring image generation properties, such as
  aspect ratio and image size, through the new `ImageConfig` struct and its
  integration with `GenerationConfig`.
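  A rough sketch of wiring an image config into a generation config; the
  aspect-ratio value and argument labels here are placeholders, not the
  SDK's confirmed API surface:

  ```swift
  import FirebaseAI

  // Hypothetical option names; check the ImageConfig reference for the
  // actual aspect-ratio and size values it exposes.
  let config = GenerationConfig(
    responseModalities: [.text, .image],
    imageConfig: ImageConfig(aspectRatio: .landscape16x9)
  )
  ```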
- [feature] Public Preview: Introduces `GenerativeModelSession`, providing
  APIs for generating structured data from Gemini via the same `@Generable`
  and `@Guide` macros that are used with Foundation Models.
- [changed] The URL context tool APIs are now GA.
- [feature] Added support for implicit caching (context caching) metadata in
  `GenerateContentResponse`. You can now access `cachedContentTokenCount` and
  `cacheTokensDetails` in `UsageMetadata` to see savings from cached content.
  See the caching documentation for more details.
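  A minimal sketch of reading the fields named above, assuming
  `cachedContentTokenCount` is a non-optional `Int` like `promptTokenCount`:

  ```swift
  let response = try await model.generateContent("Summarize the report again.")
  if let usage = response.usageMetadata {
    // Tokens served from the implicit cache did not incur full input cost.
    print("Prompt tokens: \(usage.promptTokenCount)")
    print("From cache: \(usage.cachedContentTokenCount)")
  }
  ```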
- [feature] Added support for configuring thinking levels with Gemini 3 series
  models and later. (#15557)
- [fixed] Fixed support for API keys with iOS+ app Bundle ID restrictions by
  setting the `x-ios-bundle-identifier` header. (#15475)
- [feature] Added support for Server Prompt Templates.
- [changed] Renamed the `FirebaseAI` module to `FirebaseAILogic`. This is a
  non-breaking change. `FirebaseAI` references will continue to work until a
  future breaking change release. Going forward, imports should be changed to
  `import FirebaseAILogic` and the `FirebaseAILogic` Swift Package dependency
  should be selected. See the Swift module name change FAQ entry for more
  details.
- [fixed] Fixed a nanoseconds parsing issue in the Live API when receiving a
  `LiveServerGoingAwayNotice` message. (#15410)
- [feature] Added support for sending video frames with the Live API via the
  `sendVideoRealtime` method on `LiveSession`. (#15432)
- [feature] Added support for the URL context tool, which allows the model to
  access content from provided public web URLs to inform and enhance its
  responses. (#15221)
- [changed] Using Firebase AI Logic with the Gemini Developer API is now
  Generally Available (GA).
- [changed] Using Firebase AI Logic with the Imagen generation APIs is now
  Generally Available (GA).
- [feature] Added support for the Live API, which allows bidirectional
  communication with the model in realtime. To get started with the Live API,
  see the Firebase docs on Bidirectional streaming using the Gemini Live API.
  (#15309)
- [feature] Added support for the Code Execution tool, which enables the model to generate and run code to perform complex tasks like solving mathematical equations or visualizing data. (#15280)
- [fixed] Fixed a decoding error when generating images with the
  `gemini-2.5-flash-image-preview` model using `generateContentStream` or
  `sendMessageStream` with the Gemini Developer API. (#15262)
- [feature] Added support for returning thought summaries, which are synthesized versions of a model's internal reasoning process. (#15096)
- [feature] Added support for limited-use tokens with Firebase App Check. These limited-use tokens are required for an upcoming optional feature called replay protection. We recommend enabling the usage of limited-use tokens now so that when replay protection becomes available, you can enable it sooner because more of your users will be on versions of your app that send limited-use tokens. (#15099)
- [feature] Added support for Grounding with Google Search. (#15014)
- [removed] Removed `CountTokensResponse.totalBillableCharacters`, which was
  deprecated in 11.15.0. Use `totalTokens` instead. (#15056)
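  Migration is a one-property swap; a sketch of the replacement, assuming an
  existing `model` instance:

  ```swift
  let response = try await model.countTokens("Why is the sky blue?")
  // totalBillableCharacters was removed; tokens are the billing unit now.
  print("Tokens: \(response.totalTokens)")
  ```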
- [fixed] Fixed `Sendable` warnings introduced in the Xcode 26 beta. (#14947)
- [added] Added support for setting `title` in string, number and array
  `Schema` types. (#14971)
- [added] Added support for configuring the "thinking" budget when using
  Gemini 2.5 series models. (#14909)
- [changed] Deprecated `CountTokensResponse.totalBillableCharacters`; use
  `totalTokens` instead. Gemini 2.0 series models and newer are always billed
  by token count. (#14934)
- [feature] Initial release of the Firebase AI Logic SDK (`FirebaseAI`). This
  SDK replaces the previous Vertex AI in Firebase SDK (`FirebaseVertexAI`) to
  accommodate the evolving set of supported features and services.
  - The new Firebase AI Logic SDK provides preview support for the Gemini
    Developer API, including its free tier offering.
  - Using the Firebase AI Logic SDK with the Vertex AI Gemini API is still
    generally available (GA).

  To start using the new SDK, import the `FirebaseAI` module and use the
  top-level `FirebaseAI` class. See details in the migration guide.
- [fixed] Fixed `ModalityTokenCount` decoding when the `tokenCount` field is
  omitted; this occurs when the count is 0. (#14745)
- [fixed] Fixed `Candidate` decoding when `SafetyRating` values are missing a
  category or probability; this may occur when using Gemini for image
  generation. (#14817)
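  The initial-release entry above translates to roughly the following usage;
  the model name is illustrative:

  ```swift
  import FirebaseAI

  // Gemini Developer API backend (preview at the time of this release);
  // use the Vertex AI backend for the GA Vertex AI Gemini API.
  let ai = FirebaseAI.firebaseAI(backend: .googleAI())
  let model = ai.generativeModel(modelName: "gemini-2.0-flash")
  let response = try await model.generateContent("Hello!")
  print(response.text ?? "No text in response")
  ```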
- [added] Public Preview: Added support for specifying response modalities in
  `GenerationConfig`. This includes public experimental support for image
  generation using Gemini 2.0 Flash (`gemini-2.0-flash-exp`). (#14658)

  Note: This feature is in Public Preview and relies on experimental models,
  which means that it is not subject to any SLA or deprecation policy and
  could change in backwards-incompatible ways.
- [added] Added support for more `Schema` fields: `minItems`/`maxItems`
  (array size limits), `title` (schema name), `minimum`/`maximum` (numeric
  ranges), `anyOf` (select from sub-schemas), and `propertyOrdering` (JSON
  key order). (#14647)
- [fixed] Fixed an issue where network requests would fail in the iOS 18.4
  simulator due to a `URLSession` bug introduced in Xcode 16.3. (#14677)
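  The new `Schema` fields can be combined when describing structured output;
  the argument labels in this sketch are assumptions based on the field names
  in the entry above:

  ```swift
  // Illustrative only; consult the Schema reference for exact signatures.
  let recipeSchema = Schema.object(
    properties: [
      "name": .string(),
      "rating": .integer(minimum: 1, maximum: 5),
      "ingredients": .array(items: .string(), minItems: 2, maxItems: 20),
    ],
    propertyOrdering: ["name", "rating", "ingredients"]
  )
  ```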
- [added] Emits a warning when attempting to use an incompatible model with
  `GenerativeModel` or `ImagenModel`. (#14610)
- [feature] The Vertex AI SDK no longer requires `@preconcurrency` when
  imported in Swift 6.
- [feature] The Vertex AI Sample App now includes an image generation example.
- [changed] The Vertex AI Sample App is now part of the quickstart-ios repo.
- [changed] The `role` in system instructions is now ignored; no code changes
  are required. (#14558)
- [feature] Public Preview: Added support for generating images using the
  Imagen 3 models.

  Note: This feature is in Public Preview, which means that it is not subject
  to any SLA or deprecation policy and could change in backwards-incompatible
  ways.
- [feature] Added support for modality-based token count. (#14406)
- [changed] The token counts from `GenerativeModel.countTokens(...)` now
  include tokens from the schema for JSON output and function calling;
  reported token counts will now be higher if using these features.
- [fixed] Fixed an issue where `VertexAI.vertexAI(app: app1)` and
  `VertexAI.vertexAI(app: app2)` would return the same instance if their
  `location` was the same, including the default `us-central1`. (#14007)
- [changed] Removed `format: "double"` in `Schema.double()` since
  double-precision accuracy isn't enforced by the model; continue using the
  Swift `Double` type when decoding data produced with this schema. (#13990)
- [feature] Vertex AI in Firebase is now Generally Available (GA) and can be
used in production apps. (#13725)
Use the Vertex AI in Firebase library to call the Vertex AI Gemini API directly from your app. This client library is built specifically for use with Swift apps, offering security options against unauthorized clients as well as integrations with other Firebase services.
Note: Vertex AI in Firebase is currently only available in Swift Package Manager and CocoaPods. Stay tuned for the next release for the Zip and Carthage distributions.
- If you're new to this library, visit the getting started guide.
- If you used the preview version of the library, visit the migration guide to learn about some important updates.
- [changed] Breaking Change: The `HarmCategory` enum is no longer nested
  inside the `SafetySetting` struct and the `unspecified` case has been
  removed. (#13686)
- [changed] Breaking Change: The `BlockThreshold` enum in `SafetySetting` has
  been renamed to `HarmBlockThreshold`. (#13696)
- [changed] Breaking Change: The `unspecified` case has been removed from the
  `FinishReason`, `BlockReason` and `HarmProbability` enums; this scenario is
  now handled by the existing `unknown` case. (#13699)
- [changed] Breaking Change: The property `citationSources` of
  `CitationMetadata` has been renamed to `citations`. (#13702)
- [changed] Breaking Change: The initializer for `Schema` is now internal;
  use the new type methods `Schema.string(...)`, `Schema.object(...)`, etc.,
  instead. (#13852)
- [changed] Breaking Change: The initializer for `FunctionDeclaration` now
  accepts an array of optional parameters instead of a list of required
  parameters; if a parameter is not listed as optional it is assumed to be
  required. (#13616)
- [changed] Breaking Change: `CountTokensResponse.totalBillableCharacters` is
  now optional (`Int?`); it may be `nil` in cases such as when a
  `GenerateContentRequest` contains only images or other non-text content.
  (#13721)
- [changed] Breaking Change: The `ImageConversionError` enum is no longer
  public; image conversion errors are still reported as
  `GenerateContentError.promptImageContentError`. (#13735)
- [changed] Breaking Change: The `CountTokensError` enum has been removed;
  errors occurring in `GenerativeModel.countTokens(...)` are now thrown
  directly instead of being wrapped in a `CountTokensError.internalError`.
  (#13736)
- [changed] Breaking Change: The enum `ModelContent.Part` has been replaced
  with a protocol named `Part` to avoid future breaking changes with new part
  types. The new types `TextPart` and `FunctionCallPart` may be received when
  generating content; additionally the types `InlineDataPart`, `FileDataPart`
  and `FunctionResponsePart` may be provided as input. (#13767)
- [changed] Breaking Change: All initializers for `ModelContent` now require
  the label `parts:`. (#13832)
- [changed] Breaking Change: `HarmCategory`, `HarmProbability`, and
  `FinishReason` are now structs instead of enums and the `unknown` cases
  have been removed; in a `switch` statement, use the `default:` case to
  cover unknown or unhandled values. (#13728, #13854, #13860)
- [changed] Breaking Change: The `Tool` initializer is now internal; use the
  new type method `functionDeclarations(_:)` to create a `Tool` for function
  calling. (#13873)
- [changed] Breaking Change: The `FunctionCallingConfig` initializer and
  `Mode` enum are now internal; use one of the new type methods `auto()`,
  `any(allowedFunctionNames:)`, or `none()` to create a config. (#13873)
- [changed] Breaking Change: The `CandidateResponse` type is now named
  `Candidate`. (#13897)
- [changed] Breaking Change: The minimum deployment target for the SDK is now
  macOS 12.0; all other platform minimums remain the same at iOS 15.0,
  macCatalyst 15.0, tvOS 15.0, and watchOS 8.0. (#13903)
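  Because `HarmCategory` is now a struct rather than an enum, a `switch` over
  it can no longer be exhaustive; a sketch of the `default:` pattern (the
  `SafetyRating` parameter here is a hypothetical caller-supplied value):

  ```swift
  func describe(_ rating: SafetyRating) -> String {
    // A struct may gain new static values in future SDK releases,
    // so `default:` is required to cover them.
    switch rating.category {
    case .harassment: return "harassment"
    case .dangerousContent: return "dangerous content"
    default: return "other category"
    }
  }
  ```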
- [changed] Breaking Change: All of the public properties of
  `GenerationConfig` are now `internal`; they all remain configurable in the
  initializer. (#13904)
- [changed] The default request timeout is now 180 seconds instead of the
  platform-default value of 60 seconds for a `URLRequest`; this timeout may
  still be customized in `RequestOptions`. (#13722)
- [changed] The response from `GenerativeModel.countTokens(...)` now includes
  `systemInstruction`, `tools` and `generationConfig` in the `totalTokens`
  and `totalBillableCharacters` counts, where applicable. (#13813)
- [added] Added a new `HarmCategory.civicIntegrity` for filtering content
  that may be used to harm civic integrity. (#13728)
- [added] Added `probabilityScore`, `severity` and `severityScore` in
  `SafetyRating` to provide more fine-grained detail on blocked responses.
  (#13875)
- [added] Added a new `HarmBlockThreshold.off`, which turns off the safety
  filter. (#13863)
- [added] Added an optional `HarmBlockMethod` parameter `method` in
  `SafetySetting` that configures whether responses are blocked based on the
  `probability` and/or `severity` of content being in a `HarmCategory`.
  (#13876)
- [added] Added new `FinishReason` values `.blocklist`, `.prohibitedContent`,
  `.spii` and `.malformedFunctionCall` that may be reported. (#13860)
- [added] Added new `BlockReason` values `.blocklist` and `.prohibitedContent`
  that may be reported when a prompt is blocked. (#13861)
- [added] Added the `PromptFeedback` property `blockReasonMessage` that may be
  provided alongside the `blockReason`. (#13891)
- [added] Added an optional `publicationDate` property that may be provided in
  `Citation`. (#13893)
- [added] Added `presencePenalty` and `frequencyPenalty` parameters to
  `GenerationConfig`. (#13899)
- [added] Added `Decodable` conformance for `FunctionResponse`. (#13606)
- [changed] Breaking Change: Reverted the refactor of `GenerativeModel` and
  `Chat` as Swift actors (#13545) introduced in 11.2; the methods
  `generateContentStream`, `startChat` and `sendMessageStream` no longer need
  to be called with `await`. (#13703)
- [fixed] Resolved a decoding error for citations without a `uri` and added
  support for decoding `title` fields, which were previously ignored. (#13518)
- [changed] Breaking Change: The methods for starting streaming requests
  (`generateContentStream` and `sendMessageStream`) are now throwing and
  asynchronous and must be called with `try await`. (#13545, #13573)
- [changed] Breaking Change: Creating a chat instance (`startChat`) is now
  asynchronous and must be called with `await`. (#13545)
- [changed] Breaking Change: The source image in the
  `ImageConversionError.couldNotConvertToJPEG` error case is now an enum
  value instead of the `Any` type. (#13575)
- [added] Added support for specifying a JSON `responseSchema` in
  `GenerationConfig`; see control generated output for more details. (#13576)
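  A minimal sketch of the JSON-output configuration described above; the
  schema shape is illustrative:

  ```swift
  // Ask the model to return a JSON array of strings.
  let config = GenerationConfig(
    responseMIMEType: "application/json",
    responseSchema: .array(items: .string())
  )
  ```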
- [feature] Added community support for watchOS. (#13215)
- [changed] Removed uses of the `gemini-1.5-flash-preview-0514` model in docs
  and samples. Developers should now use the auto-updated versions,
  `gemini-1.5-pro` or `gemini-1.5-flash`, or a specific stable version; see
  available model names for more details. (#13099)
- [feature] Added community support for tvOS and visionOS. (#13090, #13092)
- [changed] Removed uses of the `gemini-1.5-pro-preview-0409` model in docs
  and samples. Developers should now use `gemini-1.5-pro-preview-0514` or
  `gemini-1.5-flash-preview-0514`; see available model names for more details.
  (#12979)
- [changed] Logged additional details when required APIs for Vertex AI are not
  enabled or response payloads when requests fail. (#13007, #13009)
- [feature] Initial release of the Vertex AI for Firebase SDK (public preview). Learn how to get started with the SDK in your app.