
Releases: Azure-Samples/azure-search-openai-demo

2025-05-23: Optional feature for agentic retrieval from Azure AI Search

23 May 17:06
1b9885c

This release includes an exciting new option to turn on an agentic retrieval API from Azure AI Search (currently in public preview).
Read the docs about it here:
https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/docs/agentic_retrieval.md

You can also watch this talk from @mattgotteiner and @pamelafox at Microsoft Build 2025 about agentic retrieval:
https://build.microsoft.com/en-US/sessions/BRK142

Please share your feedback in either the issue tracker or discussions here. Since the retrieval API is in public preview, this is a great time to give feedback to the AI Search team.

What's Changed

Full Changelog: 2025-05-08...2025-05-23

2025-05-08: Default to text-embedding-3-large with compression, GlobalStandard SKU

09 May 06:44
faf0d46

This release upgrades the infrastructure and code to default to the text-embedding-3-large model from OpenAI. The model supports a maximum of 3072 dimensions, but we are using BinaryQuantizationCompression and truncating the dimensions to 1024, with oversampling and rescoring enabled. That means the embeddings will be stored efficiently, while search quality should remain high.
Learn more about compression from this RAG time episode or Azure AI Search documentation.
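
As a rough illustration of the dimension truncation, here is a minimal sketch using the OpenAI Python SDK's `dimensions` parameter (the client setup is simplified; the repo itself calls Azure OpenAI):

```python
from openai import OpenAI

client = OpenAI()  # simplified; the repo uses Azure OpenAI clients instead

response = client.embeddings.create(
    model="text-embedding-3-large",
    input="What does a Product Manager do?",
    dimensions=1024,  # truncate from the model's native 3072 dimensions
)
vector = response.data[0].embedding
print(len(vector))  # 1024
```

Azure AI Search then applies BinaryQuantizationCompression to the stored vectors, with oversampling and rescoring compensating for the quantization at query time.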

If you are already using the repository and don't wish to use the new embedding model, you can continue to use text-embedding-ada-002. You may need to set azd environment variables if they aren't already set; see the embedding models customization guide. If you want to switch over to the new embedding model, you will either need to re-ingest your data from scratch into a new index, or add a new field for the new model and re-generate embeddings for just that field. The code now has a variable for the embedding field name, so it should be possible to have a search index with fields for two different embedding models.
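
If you go the two-field route, the index change would look roughly like this sketch with the azure-search-documents SDK; the field and profile names here are hypothetical, not the repo's actual ones:

```python
from azure.search.documents.indexes.models import SearchField, SearchFieldDataType

# Hypothetical second vector field for text-embedding-3-large, added alongside
# the existing text-embedding-ada-002 field; names are illustrative only.
embedding3_field = SearchField(
    name="embedding3",
    type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
    searchable=True,
    vector_search_dimensions=1024,  # truncated dimensions for the new model
    vector_search_profile_name="embedding3-profile",
)
```

Each vector field points at its own vector search profile, so queries can target whichever field matches the model that produced the query vector.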

As part of this change, all model deployments now default to the GlobalStandard SKU. We made that change since it is easier to find regions in common across the many models used by this repository when using the GlobalStandard SKU. However, if you can't use that SKU for any reason, you can still customize the SKU using the parameters described in the documentation.

Please let us know in the issue tracker if you encounter any issues with the new default embedding model configuration.

What's Changed

New Contributors

Full Changelog: 2025-04-02...2025-05-08

2025-04-02: Support for reasoning models and token usage display

03 Apr 02:40
56294c9

You can now optionally use a reasoning model (o1 or o3-mini) for all chat completion requests, following the reasoning guide.

When using a reasoning model, you can select the reasoning effort (low/medium/high):

Screenshot of developer settings with reasoning model

For all models, you can now see token usage in the "Thought process" tab:

Display of token usage counts
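
For reference, here is a minimal sketch of a reasoning-model call with an effort setting, plus reading the token usage off the response, using the OpenAI Python SDK (illustrative; the app itself goes through Azure OpenAI):

```python
from openai import OpenAI

client = OpenAI()  # illustrative; the app uses Azure OpenAI clients

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="low",  # "low", "medium", or "high"
    messages=[{"role": "user", "content": "Which health plan covers eye exams?"}],
)
print(response.choices[0].message.content)

# Token usage is reported on every response; reasoning models additionally
# report how many completion tokens were spent on thinking.
usage = response.usage
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
print(usage.completion_tokens_details.reasoning_tokens)
```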

Reasoning models incur more latency due to the thinking process, so they are an option for developers to try, but not necessarily what you want to use for most RAG domains.

This release also includes several fixes for performance, Windows support, and deployment.

What's Changed

Full Changelog: 2025-03-26...2025-04-02

2025-03-26: Removal of conversation truncation logic

26 Mar 22:43
cb5149d

Previously, we had logic that would truncate conversation history by counting the tokens (with tiktoken) and only keeping the messages that fit inside the context window. Now that the default model has a much larger context window (128K), and most current models have similarly high limits, we have removed that truncation logic, so conversation history is now sent in full to the model.
See the pull request for more reasoning behind the decision.
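
For context, the removed logic looked roughly like the following simplified sketch (an illustration only, not the repo's actual implementation):

```python
import tiktoken

# Simplified sketch of the kind of truncation that was removed: keep only the
# most recent messages whose combined token count fits within a budget.
encoding = tiktoken.get_encoding("o200k_base")  # tokenizer family for gpt-4o models

def truncate_history(messages: list[dict], max_tokens: int) -> list[dict]:
    kept: list[dict] = []
    total = 0
    for message in reversed(messages):  # walk from newest to oldest
        tokens = len(encoding.encode(message["content"]))
        if total + tokens > max_tokens:
            break
        total += tokens
        kept.append(message)
    return list(reversed(kept))
```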

What's Changed

  • Remove token-counting library for conversation history truncation by @pamelafox in #2449

Full Changelog: 2025-03-25...2025-03-26

2025-03-25: Chat completion model is gpt-4o-mini by default

24 Mar 23:38
236b592

The infrastructure for this project was previously deploying a gpt-35-turbo model. We have since upgraded to the more recent gpt-4o-mini model, which has a much larger context window (128K) and lower per-token costs.
In terms of quality, it gives similarly accurate responses, but it does tend to be more verbose. You can see the comparisons on the sample data in the evals folder, and you can read my blog post summarizing the differences. You may want to adjust the prompt to generate shorter results if you find the new answers to be too verbose.

For developers with existing deployments, the app will continue to use gpt-35-turbo. You can follow the steps in the docs to switch to gpt-4o-mini or other models.

What's Changed

Full Changelog: 2025-03-21...2025-03-25

2025-03-21: Container apps deployment now allows scaling to zero

21 Mar 22:24
88f987e

To lower costs for developers experimenting, we've adjusted the scaling rules for the container apps deployment so that it can scale down to zero replicas. See the productionizing guide for tips on what to change if you're preparing code based on this repository for production:
https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/docs/productionizing.md#azure-container-apps

What's Changed

Full Changelog: 2025-03-19...2025-03-21

2025-03-19: Query rewriting from Azure AI Search

19 Mar 23:44
62f8b58

This release adds a new optional feature: the query rewriting option from Azure AI Search. This is distinct from the already existing query rewriting step in our RAG flows, which incorporates conversation history. The query rewriting from Azure AI Search focuses on expanding the query into semantically similar queries that can improve retrieval.

Learn more from the search team in this blog post:
https://techcommunity.microsoft.com/blog/azure-ai-services-blog/raising-the-bar-for-rag-excellence-query-rewriting-and-new-semantic-ranker/4302729

Enable the feature following the documentation:
https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/docs/deploy_features.md#enabling-query-rewriting
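
For a sense of what the option does at the request level, here is a rough sketch with the azure-search-documents preview SDK; the `query_rewrites` and `query_language` parameters follow the preview REST API and are an assumption here, and the endpoint, key, and index name are illustrative:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Illustrative endpoint, key, and index name; query rewriting also requires
# the semantic ranker, so query_type is set to "semantic".
search_client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="gptkbindex",
    credential=AzureKeyCredential("<api-key>"),
)

results = search_client.search(
    search_text="What does a Product Manager do?",
    query_type="semantic",
    semantic_configuration_name="default",
    query_rewrites="generative|count-5",  # assumption: preview-only parameter
    query_language="en-us",               # assumption: required alongside query_rewrites
)
for result in results:
    print(result["content"])
```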

What's Changed

New Contributors

Full Changelog: 2025-02-20...2025-03-19

2025-02-20: Safety evaluations

20 Feb 19:41
31ea846

This project now includes optional AI Safety evaluations, using an Azure AI Project and the Azure AI Evaluation SDK.
See the documentation for instructions on running the evaluations.

What's Changed

  • Upgrading openai and removing numpy dependency by @pamelafox in #2362
  • Bump Azure/setup-azd from 2.0.0 to 2.1.0 in the github-actions group by @dependabot in #2366
  • AI Safety evaluations (with AI Project provisioning) by @pamelafox in #2370

Full Changelog: 2025-02-13...2025-02-20

2025-02-13: Italian localization

14 Feb 06:57
efbf397

The UI is now available in Italian, so the text will display in Italian if the user's browser is configured accordingly, or if the app has the language picker enabled and the user picks Italian.

Screenshot of RAG chat app in Italian

What's Changed

New Contributors

Full Changelog: 2025-02-11...2025-02-13

2025-02-11: Evaluation scripts and workflow

11 Feb 08:19
e873ba9

For a long time, we've directed developers to follow the steps in ai-rag-chat-evaluator to run evaluations on this app. To make it easier, we've now integrated evaluation directly into the repository, both as CLI scripts and as a GitHub Actions workflow.

Learn more from the evaluation guide or watch this video about evaluation.

What's Changed

New Contributors

Full Changelog: 2025-02-07...2025-02-11