
VoiceRAG: An Application Pattern for RAG + Voice Using Azure AI Search and the GPT-4o Realtime API for Audio

Open in GitHub Codespaces Open in Dev Containers

This repo contains an example of how to implement RAG support in applications that use voice as their user interface, powered by the GPT-4o realtime API for audio. We describe the pattern in more detail in this blog post, and you can see this sample app in action in this short video.

This demo is customized for a fictional airline, STU, and its customer assistant, with specific data and tools (APIs the tools call for flight booking and customer information). Take a look at the data loaded by this file:

app\api\data\load_data.py

and the FAQ data (in French, embedded and pushed to Azure AI Search during the build) located in this file:

data\faq.json

Also take a look at app\app.py, which includes the system prompt (currently constrained to answer in English), and ragtools.py, which includes the tools that call the API and access Azure AI Search.
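For orientation, here is a minimal sketch of what the search side of such a tool could look like, using the azure-search-documents client; the function name, index name, and document field names below are illustrative assumptions, so check ragtools.py for the actual implementation.

    # Hypothetical sketch of a knowledge-base search helper in the spirit of ragtools.py.
    # The field names ("id", "content") and the default index name are assumptions.
    import os

    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    search_client = SearchClient(
        endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
        index_name=os.environ.get("AZURE_SEARCH_INDEX", "faq-index"),  # assumed default name
        credential=AzureKeyCredential(os.environ["AZURE_SEARCH_API_KEY"]),
    )

    def search_knowledge_base(query: str, top: int = 5) -> list[dict]:
        """Return the best-matching FAQ passages so the realtime model can ground its answer."""
        results = search_client.search(search_text=query, top=top)
        return [{"id": doc["id"], "content": doc["content"]} for doc in results]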

Features

  • Voice interface: The app uses the browser's microphone to capture voice input, and sends it to the backend where it is processed by the Azure OpenAI GPT-4o Realtime API.
  • RAG (Retrieval Augmented Generation): The app uses the Azure AI Search service to answer questions about a knowledge base, and sends the retrieved documents to the GPT-4o Realtime API to generate a response.
  • Audio output: The app plays the response from the GPT-4o Realtime API as audio, using the browser's audio capabilities.
  • Citations: The app shows the search results that were used to generate the response.
  • APIs: The app uses a Container Apps API to provide additional information through the tools; a minimal sketch of such an endpoint follows this list.
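As a hedged illustration of that last feature, a minimal API endpoint could look like the sketch below; the use of FastAPI, the routes, and the response shapes are assumptions for illustration, and the real endpoints live in app\api.

    # Hypothetical sketch of a Container Apps API that the assistant's tools could call.
    # FastAPI, the routes, and the response shapes are assumptions; see app\api for the real code.
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/health")
    def health() -> dict:
        # Used later in this README to confirm the API container is running.
        return {"status": "ok"}

    @app.get("/flights/{flight_number}")
    def flight_status(flight_number: str) -> dict:
        # In the sample, flight data is loaded by app\api\data\load_data.py; this is just a stub.
        return {"flight": flight_number, "status": "on time"}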

Architecture Diagram

The RTClient in the frontend captures the audio input and sends it to the Python backend, which uses an RTMiddleTier object to interface with the Azure OpenAI realtime API and includes a tool for searching Azure AI Search.

Diagram of real-time RAG pattern
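To make the tool path concrete, the backend registers a function definition that the realtime model can call; a hedged sketch of such a definition is shown below, noting that the exact schema and the way RTMiddleTier registers tools may differ from this.

    # Hypothetical function (tool) definition in the shape the GPT-4o realtime API accepts.
    # The name, description, and parameter schema are illustrative assumptions.
    search_tool_schema = {
        "type": "function",
        "name": "search_knowledge_base",
        "description": "Search the STU FAQ index in Azure AI Search and return relevant passages.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The user's question rephrased as a search query.",
                },
            },
            "required": ["query"],
        },
    }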

This repository includes infrastructure as code and a Dockerfile to deploy the app to Azure Container Apps, but it can also be run locally as long as Azure AI Search and Azure OpenAI services are configured.

Getting Started

You have a few options for getting started with this template. The quickest way is GitHub Codespaces, since it sets up all the tools for you, but you can also set the project up locally or use a VS Code dev container.

GitHub Codespaces

You can run this repo virtually by using GitHub Codespaces, which will open a web-based VS Code in your browser:

Open in GitHub Codespaces

Once the codespace opens (this may take several minutes), open a new terminal and proceed to deploy the app.

VS Code Dev Containers

You can run the project in your local VS Code Dev Container using the Dev Containers extension:

  1. Start Docker Desktop (install it if not already installed)

  2. Open the project:

    Open in Dev Containers

  3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a new terminal, and proceed to deploying the app.

Local environment

  1. Install the required tools:

    • Azure Developer CLI
    • Node.js
    • NPM
    • Docker
    • C++ build tools (needed by some Python dependencies)
    • Python >=3.11
      • Important: Python and the pip package manager must be in the path in Windows for the setup scripts to work.
      • Important: Ensure you can run python --version from console. On Ubuntu, you might need to run sudo apt install python-is-python3 to link python to python3.
    • Git
    • Powershell - For Windows users only.
  2. Clone the repo (git clone https://github.com/olivMertens/gpt4oaudioVoiceRagApis.git)

  3. Proceed to the next section to deploy the app.

Deploying the app

The steps below will provision Azure resources and deploy the application code to Azure Container Apps.

  1. Login to your Azure account:

    azd auth login

    For GitHub Codespaces users, if the previous command fails, try:

     azd auth login --use-device-code
  2. Create a new azd environment:

    azd env new

    Enter a name that will be used for the resource group. This will create a new folder in the .azure folder, and set it as the active environment for any calls to azd going forward.

  3. This is the point where you can customize the deployment by setting azd environment variables, in order to reuse existing services or customize the voice choice. In the steps below we customize the OpenAI endpoint to reuse an existing service and a single deployment of the embedding model.

  4. Run this command to ensure that the infrastructure does not create a brand-new OpenAI service:

    azd env set AZURE_OPENAI_REUSE_EXISTING true
  5. Run this command to ensure that the infrastructure assigns the proper RBAC roles for accessing the OpenAI resource:

    azd env set AZURE_OPENAI_RESOURCE_GROUP yourresourcegroupname
  6. Run this command to point the app code at your Azure OpenAI endpoint:

    azd env set AZURE_OPENAI_ENDPOINT https://<your-openai-resource-name>.openai.azure.com
  7. Run this command to point the app code at your Azure OpenAI realtime deployment, using gpt-4o-realtime-preview or gpt-4o-mini-realtime-preview as appropriate. Note that the deployment name may differ from the model name:

    azd env set AZURE_OPENAI_REALTIME_DEPLOYMENT gpt-4o-realtime-preview
    azd env set AZURE_OPENAI_EMBEDDING_MODEL text-embedding-3-large
    
  8. Run this single command to provision the resources, deploy the code, and setup integrated vectorization for the sample data:

    azd up
    • Important: Beware that the resources created by this command will incur immediate costs, primarily from the AI Search resource. These resources may accrue costs even if you interrupt the command before it completes. You can run azd down or delete the resources manually to avoid unnecessary spending. Deleting the resources can take a while, so be patient.
    • Important: Preview model deployments are soft-deleted and remain in the subscription for 30 days. You can purge them manually in the Azure portal under your Azure OpenAI resource. This matters because preview models allow only a limited number of deployments, and you can't create a new one once that limit is reached. Alternatively, use the hard delete command azd down --purge.

    [Screenshot: soft-deleted deployments in the Azure OpenAI resource]

    • You will be prompted to select two locations, one for the majority of resources and one for the OpenAI resource, which is currently a short list. That location list is based on the OpenAI model availability table and may become outdated as availability changes. For information about quota and the regions available for the realtime preview model, check the Azure OpenAI documentation.
    • You can also update the quota for the model directly in Azure AI Foundry, under the Deployments section. [Screenshot: deployment quota in AI Foundry]
  9. After the application has been successfully deployed, you will see two URLs printed to the console: one for the API and one for the app. To check that the API is running, open its URL and append /health. To try the app, open its URL in your browser, click the "Start conversation" button, say "Hello", and then ask a question about your data such as "What is the status of flight STU1234?"
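If you prefer to script the API health check instead of opening it in a browser, a minimal sketch could look like the following; the URL is a placeholder for the API URL that azd printed, and the check assumes the endpoint returns JSON.

    # Minimal health check against the deployed API; replace the placeholder URL
    # with the API URL printed by azd. Assumes the /health endpoint returns JSON.
    import json
    import urllib.request

    API_URL = "https://<your-api-container-app>.azurecontainerapps.io"  # placeholder

    with urllib.request.urlopen(f"{API_URL}/health") as resp:
        print(resp.status, json.loads(resp.read()))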

You can also now run the app locally by following the instructions in the next section.

Development local server

You can run this app locally using either the Azure services you provisioned by following the deployment instructions, or by pointing the local app at already existing services.

  1. If you deployed with azd up, you should see an app/backend/.env file with the necessary environment variables.

  2. If you did not use azd up, you will need to create an app/backend/.env file with the following environment variables:

    AZURE_OPENAI_ENDPOINT=wss://<your instance name>.openai.azure.com
    AZURE_OPENAI_REALTIME_DEPLOYMENT=<gpt-4o-realtime-preview or gpt-4o-mini-realtime-preview>
    AZURE_OPENAI_REALTIME_VOICE_API_VERSION="2024-10-01-preview"
    AZURE_OPENAI_REALTIME_VOICE_CHOICE=<choose one: alloy, ash, coral, echo, fable, onyx, nova, sage, or shimmer>
    AZURE_OPENAI_API_KEY=<your api key>
    AZURE_SEARCH_ENDPOINT=https://<your service name>.search.windows.net
    AZURE_SEARCH_INDEX=<your index name>
    AZURE_SEARCH_API_KEY=<your api key>

    To use Entra ID (your user identity when running locally, managed identity when deployed), simply don't set the keys.

  3. Install the requirements for the API and the backend:

    cd app/api
    pip install -r requirements.txt
    cd ../backend
    pip install -r requirements.txt
  4. If you need to re-run the embedding and indexing for AI Search, run the following commands:

    cd app/backend
    py setup_intvect.py    # when running locally, change the data path from data/faq.json to ../../data/faq.json
  5. Run this command to start the app:

on Windows:

cd ../../
pwsh .\scripts\start.ps1

or for Linux/Mac:

./scripts/start.sh

You can verify that the API is running by navigating to http://localhost:8765/health.

  6. The app is available on http://localhost:8000.

    Once the app is running, navigate to the URL above and you should see the start screen of the app: app screenshot

    To try out the app, click the "microphone button", say "Hello", and then ask a question about your data like "what is the status of the flight STU1234?".

Guidance

Costs

Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. However, you can try the Azure pricing calculator for the resources below.

  • Azure Container Apps: Consumption plan with 1 CPU core, 2.0 GB RAM. Pricing with Pay-as-You-Go. Pricing
  • Azure OpenAI: Standard tier, gpt-4o-realtime and text-embedding-3-large models. Pricing per 1K tokens used. Pricing
  • Azure AI Search: Standard tier, 1 replica, free level of semantic search. Pricing per hour. Pricing
  • Azure Blob Storage: Standard tier with ZRS (Zone-redundant storage). Pricing per storage and read operations. Pricing
  • Azure Monitor: Pay-as-you-go tier. Costs based on data ingested. Pricing

To reduce costs, you can switch to free SKUs for various services, but those SKUs have limitations.

⚠️ To avoid unnecessary costs, remember to take down your app if it's no longer in use, either by deleting the resource group in the Portal or running azd down.

Security

This template uses Managed Identity so that developers don't need to manage credentials. Applications can use managed identities to obtain Microsoft Entra tokens without handling any secrets. To follow best practices in your repo, we recommend that anyone creating solutions based on our templates enable GitHub secret scanning in their repositories.
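As a hedged sketch of how the key-or-identity behaviour described above (and in the .env section) can be wired up, the backend could select a credential along these lines; the actual logic in app.py may differ.

    # Hypothetical credential selection: use the API key if one is set, otherwise fall back to Entra ID.
    # Mirrors the advice above ("simply don't set the keys"); the real app.py may do this differently.
    import os

    from azure.core.credentials import AzureKeyCredential
    from azure.identity import DefaultAzureCredential

    def search_credential():
        key = os.environ.get("AZURE_SEARCH_API_KEY")
        if key:
            return AzureKeyCredential(key)
        # Locally this resolves to your signed-in user; in Azure Container Apps, to the managed identity.
        return DefaultAzureCredential()

When the app runs in Azure Container Apps with this approach, the managed identity still needs the RBAC role assignments described in the deployment steps above; the azd infrastructure takes care of those when you deploy with it.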
