Azure Function for PR Review with AI Suggestions πŸ€–

Automatically review pull requests in Azure DevOps using AI-powered code analysis. This Azure Function integrates with Azure DevOps to provide intelligent code reviews, adding comments directly to your PRs and optionally creating improvement PRs with suggested fixes.

πŸ”„ How It Works

  1. When a PR is created or updated in Azure DevOps, a webhook event is triggered
  2. The Azure Function receives the webhook payload and validates it
  3. The function checks if the PR is eligible for review (not a draft, not AI-generated)
  4. The function loads the review guidelines from the specified source
  5. The function initializes the selected AI model based on the MODEL_TYPE setting
  6. For each changed file in the PR:
    • The function retrieves the old and new content
    • The AI model analyzes the changes and generates comments
    • Comments are added to the PR at specific line numbers
  7. If corrections are available and CREATE_NEW_PR is true:
    • A new branch is created based on the source branch
    • The corrected files are committed to the new branch
    • A new PR is created with the AI-suggested improvements
[Diagram: PR review process]
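
For orientation, here is a minimal sketch of what the webhook entry point can look like in the Azure Functions v4 Node.js programming model. The function name PRReviewTrigger matches the setup steps below; the payload field checks, the title-prefix convention, and the reviewPullRequest helper are illustrative assumptions rather than the repository's exact code.

import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

app.http("PRReviewTrigger", {
  methods: ["POST"],
  authLevel: "function",
  handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    const payload = (await request.json()) as any;

    // Steps 1-2: accept only PR created/updated service-hook events.
    if (payload?.eventType !== "git.pullrequest.created" &&
        payload?.eventType !== "git.pullrequest.updated") {
      return { status: 200, body: "Ignored: unsupported event type" };
    }

    // Step 3: skip drafts and PRs the agent generated itself
    // (the title prefix below is an assumed convention, not the repo's exact check).
    const pr = payload.resource;
    if (pr?.isDraft || pr?.title?.startsWith("AI Suggestions")) {
      return { status: 200, body: "Ignored: draft or AI-generated PR" };
    }

    context.log(`Reviewing PR #${pr.pullRequestId} in ${pr.repository?.name}`);
    // Steps 4-7: load guidelines, initialize the model chosen by MODEL_TYPE,
    // review each changed file, post comments, and optionally open a new PR.
    // await reviewPullRequest(pr); // hypothetical orchestration helper

    return { status: 200, body: "Review triggered" };
  },
});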

✨ Features

  • πŸ”„ Automatic triggering on Azure DevOps PR webhook events
  • 🧠 AI-powered code analysis (supports multiple AI models)
  • πŸ’¬ Contextual comments added directly to PR lines
  • πŸ› οΈ Optional creation of improvement PRs with AI-suggested fixes
  • πŸ“ Customizable review guidelines
  • πŸ”€ Support for multiple files in a single PR
  • πŸ€– Configurable AI model selection via environment variables

πŸ“‹ Prerequisites

  • Azure account with Function App creation permissions
  • Azure DevOps project with admin access
  • At least one AI API key (Azure OpenAI, OpenAI, or Google Gemini)
  • Code review guidelines document

πŸš€ Setup Guide

1. Create a resource group

Using Azure Portal
  • Azure Portal > Resource Groups > Create
Using CLI
  • az login
  • az group create --name <RESOURCE_GROUP_NAME> --location <REGION>

2. Create an Azure Function App

Using Azure Portal
  • Use Node.js runtime stack
  • Enable Azure OpenAI when creating the function app

Using CLI
az functionapp create --resource-group <RESOURCE_GROUP_NAME> --consumption-plan-location <REGION> --runtime node --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME>
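
The command above assumes the storage account <STORAGE_NAME> already exists; if it does not, you can create one first, for example:

az storage account create --name <STORAGE_NAME> --resource-group <RESOURCE_GROUP_NAME> --location <REGION> --sku Standard_LRS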

3. Deploy the Function

Using Visual Studio Code
  • Open the project in VS Code
  • Install the Azure Functions extension
  • Sign in to your Azure account
  • Press Ctrl+Shift+P and run "Azure Functions: Deploy to Function App..."
  • Select your target Function App
Using CLI
cd azure-function-pr-review
npm install
func azure functionapp publish <YOUR_FUNCTION_APP_NAME>

4. Create Azure OpenAI and Deploy a Model


If you want to deploy a different model in Azure:

  • Go to AI Foundry
  • Create an Azure AI Hub (make sure the region you select supports the model you plan to deploy; e.g., as of February 2025, the DeepSeek R1 model is available only in the East US, East US 2, West US, West US 3, South Central US, and North Central US regions)
  • Go to the resource and launch Azure AI Foundry
  • Create a project
  • Deploy the model you want

5. Configure Application Settings βš™οΈ

In the Azure Portal, navigate to your Function App > Configuration and add the following application settings:

Environment Variable                Description
AZURE_PAT                           Personal Access Token for Azure DevOps with Code (Read & Write) permissions
AZURE_REPO                          Default repository name (optional if provided in the webhook payload)
INSTRUCTION_SOURCE                  Path or URL to the review guidelines file
CREATE_NEW_PR                       Set to "true" to create new PRs with AI suggestions
MODEL_TYPE                          AI model to use: "azure", "openai", "gemini", "deepseek", "anthropic", or "groq"
GEMINI_API_KEY                      Google Gemini API key (required if MODEL_TYPE is "gemini")
OPENAI_API_KEY                      OpenAI API key (required if MODEL_TYPE is "openai")
AZURE_OPENAI_API_KEY                Azure OpenAI API key (required if MODEL_TYPE is "azure")
AZURE_OPENAI_API_INSTANCE_NAME      Azure OpenAI instance name (required if MODEL_TYPE is "azure")
AZURE_OPENAI_API_DEPLOYMENT_NAME    Azure OpenAI deployment name (required if MODEL_TYPE is "azure")
AZURE_OPENAI_API_VERSION            Azure OpenAI API version (defaults to "2023-12-01-preview")
DEEPSEEK_API_KEY                    DeepSeek API key (required if MODEL_TYPE is "deepseek" or "deepseek-r1")
ANTHROPIC_API_KEY                   Anthropic API key (required if MODEL_TYPE is "anthropic" or "claude")
GROQ_API_KEY                        Groq API key (required if MODEL_TYPE is "groq")
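
For local development with Azure Functions Core Tools, the same settings can be placed in local.settings.json. A minimal sketch assuming the Azure OpenAI provider; all values are placeholders:

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "AZURE_PAT": "<your-devops-pat>",
    "INSTRUCTION_SOURCE": "<path-or-url-to-guidelines>",
    "CREATE_NEW_PR": "true",
    "MODEL_TYPE": "azure",
    "AZURE_OPENAI_API_KEY": "<key>",
    "AZURE_OPENAI_API_INSTANCE_NAME": "<instance-name>",
    "AZURE_OPENAI_API_DEPLOYMENT_NAME": "<deployment-name>",
    "AZURE_OPENAI_API_VERSION": "2023-12-01-preview"
  }
}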

6. Configure Azure DevOps Webhook

  1. Go to Project Settings > Service Hooks
  2. Create new webhook with:
    • Trigger: Pull request created
    • URL:
      • Using VSCode: ctrl+shift+P -> Azure Functions: Copy Function URL
      • Using CLI: after copying the URL, replace the trailing "==" with "%3D%3D" before pasting it into the webhook (the function key must be URL-encoded)
        FOR /F "delims=" %a IN ('az functionapp function show --resource-group <RESOURCE_GROUP> --name <FUNCTION_APP> --function-name PRReviewTrigger --query invokeUrlTemplate -o tsv') DO SET "URL=%a"
        FOR /F "delims=" %b IN ('az functionapp function keys list --resource-group <RESOURCE_GROUP> --name <FUNCTION_APP> --function-name PRReviewTrigger --query default -o tsv') DO SET "KEY=%b"
        ECHO %URL%?code=%KEY%
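
On macOS/Linux shells, an equivalent lookup (including the "==" encoding) can be done with command substitution:

URL=$(az functionapp function show --resource-group <RESOURCE_GROUP> --name <FUNCTION_APP> --function-name PRReviewTrigger --query invokeUrlTemplate -o tsv)
KEY=$(az functionapp function keys list --resource-group <RESOURCE_GROUP> --name <FUNCTION_APP> --function-name PRReviewTrigger --query default -o tsv)
echo "${URL}?code=${KEY}" | sed 's/==$/%3D%3D/'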

🧠 AI Model Selection

You can configure multiple API keys in your environment variables and switch between models by changing only the MODEL_TYPE setting (see the sketch after the list below). This allows you to:

  • πŸ”„ Easily switch between different AI providers
  • πŸ’° Optimize for cost by selecting the most economical option
  • πŸš€ Choose the model that performs best for your specific codebase
  • πŸ”’ Have fallback options if one service is unavailable
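
A minimal sketch of such a provider switch in TypeScript follows; the selectModel factory and ModelConfig shape are illustrative assumptions, and only the environment variable names come from the settings table in step 5.

type ModelConfig = { provider: string; apiKey: string };

// Read a required setting, failing fast with a descriptive error if absent.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required setting: ${name}`);
  return value;
}

// Map MODEL_TYPE to a provider and verify its required API key is configured.
function selectModel(): ModelConfig {
  const modelType = (process.env.MODEL_TYPE ?? "azure").toLowerCase();
  switch (modelType) {
    case "azure":
      return { provider: "azure", apiKey: requireEnv("AZURE_OPENAI_API_KEY") };
    case "openai":
      return { provider: "openai", apiKey: requireEnv("OPENAI_API_KEY") };
    case "gemini":
      return { provider: "gemini", apiKey: requireEnv("GEMINI_API_KEY") };
    case "deepseek":
      return { provider: "deepseek", apiKey: requireEnv("DEEPSEEK_API_KEY") };
    case "anthropic":
      return { provider: "anthropic", apiKey: requireEnv("ANTHROPIC_API_KEY") };
    case "groq":
      return { provider: "groq", apiKey: requireEnv("GROQ_API_KEY") };
    default:
      throw new Error(`Unsupported MODEL_TYPE: ${modelType}`);
  }
}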

Shown below is an AI model comparison table as of April 2025. [Image: AI model comparison table]

Notes:
  1. All prices are in USD and current as of April 2025
  2. "Cached" refers to cached input pricing, which is typically lower than standard input pricing
  3. Many providers offer batch processing with discounts (typically 25-50%)
  4. Context length refers to the maximum number of tokens the model can process in a single request
  5. Parameter sizes are not explicitly stated for many models, especially the newest ones
  6. Deployment options (global, regional) may affect pricing, especially for Azure OpenAI
  7. Some models offer time-based discounts (e.g., DeepSeek's off-peak pricing)
  8. Actual performance may vary based on specific use cases and implementation