**README.md** (174 changes: 9 additions & 165 deletions)
# Revu - AI-Powered Code Review Assistant

Revu is a GitHub App that provides intelligent, context-aware code reviews for pull requests using LLMs (Anthropic Claude and OpenAI GPT). It automatically analyzes pull requests when opened or marked ready for review, offering comprehensive feedback that goes beyond simple style checks, including extended thinking capabilities for deeper code analysis, security review, and architectural assessment.
[📦 Installation](docs/INSTALLATION.md) | [⚙️ Configuration](docs/CONFIGURATION.md)

## Features

- **Extended Thinking** - Enhanced reasoning for deeper code analysis and security review
- **Contextual Analysis** - Understands changes in the context of the entire codebase
- **Precise Code Suggestions** - Suggests accurate code changes and improvements
- **PR Validation** - Automatically skips problematic PRs with helpful feedback
- **Customizable** - Configurable coding guidelines, branch filters, and file exclusions

## Quick Start

### Installation

```bash
# Use the correct Node.js version
nvm use v23.7.0
# Clone the repository
git clone https://github.com/SocialGouv/revu.git

# Install dependencies
yarn install
```

### GitHub App Setup

1. Create a GitHub App at `Settings > Developer settings > GitHub Apps`
2. Configure permissions and events:

```yaml
Webhook URL: Your server URL or smee.io proxy
Permissions:
  - Pull requests: Read & write
  - Contents: Read
Events:
  - Pull request
  - Pull request review
```

3. Save your App ID, Private Key, and Webhook Secret
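
The Webhook Secret you save here is what GitHub uses to sign each webhook delivery. Revu's framework verifies this for you, but conceptually the check looks like the following sketch (`verifyWebhookSignature` is a hypothetical helper, not part of Revu's API):

```typescript
import crypto from "node:crypto";

// Verify GitHub's X-Hub-Signature-256 header against the raw request
// body, using the same secret configured on the GitHub App.
function verifyWebhookSignature(
  secret: string,
  rawBody: string,
  signatureHeader: string
): boolean {
  const expected =
    "sha256=" +
    crypto.createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual avoids leaking information through comparison timing.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```

A request whose header does not match the HMAC of its raw body should be rejected before any processing.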

### Proxy User Setup

Since GitHub Apps cannot receive review requests directly, Revu uses a proxy user:

1. Create a dedicated GitHub user account (e.g., `revu-bot-reviewer`)
2. Generate a personal access token with repository access
3. Ensure the proxy user has read access to target repositories

### Environment Configuration

Create a `.env` file with the following variables:

```env
# Required
ANTHROPIC_API_KEY=your_anthropic_key
APP_ID=your_github_app_id
PRIVATE_KEY_PATH=path/to/private-key.pem
WEBHOOK_SECRET=your_webhook_secret
PROXY_REVIEWER_USERNAME=revu-bot-reviewer
PROXY_REVIEWER_TOKEN=proxy_user_token

# Optional
ANTHROPIC_MODEL=claude-sonnet-4-5-20250929
ANTHROPIC_EXTENDED_CONTEXT=true
WEBHOOK_PROXY_URL=https://smee.io/your-url
```

See [.env.example](.env.example) for an example.
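
Since a missing required variable only surfaces at the first API call otherwise, it can help to fail fast at startup. A minimal sketch (the variable list is trimmed for illustration; `missingEnv` is a hypothetical helper, not Revu's actual code):

```typescript
// Hypothetical startup check: report any required variable that is unset.
const REQUIRED_ENV = ["ANTHROPIC_API_KEY", "APP_ID", "WEBHOOK_SECRET"];

function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV.filter((name) => !env[name]);
}

// At startup: const missing = missingEnv(process.env);
// if (missing.length > 0) throw new Error(`Missing env: ${missing.join(", ")}`);
```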

## Choosing a provider (Anthropic or OpenAI)

Revu supports both Anthropic and OpenAI. You can select the provider either via `config.json` or an environment variable.

### Option 1: `config.json`

```json
{
  "promptStrategy": "line-comments",
  "thinkingEnabled": true,
  "llmProvider": "openai"
}
```

- `llmProvider`: `"anthropic"` (default) or `"openai"`

### Option 2: Environment variable

You can also set the default provider via the `LLM_PROVIDER` env var:

```env
# Allowed values: anthropic | openai
LLM_PROVIDER=openai
```

Precedence rules:

1. If `config.json` sets `llmProvider`, it takes precedence over the environment.
2. Otherwise, `LLM_PROVIDER` (when valid) overrides the built-in default.
3. If neither is set, Revu defaults to `anthropic`.
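
A minimal sketch of these precedence rules (`resolveProvider` is a hypothetical helper for illustration, not Revu's actual code):

```typescript
type Provider = "anthropic" | "openai";

// config.json wins, then a valid LLM_PROVIDER env var, then the default.
function resolveProvider(
  configValue: string | undefined,
  envValue: string | undefined
): Provider {
  const valid = (v?: string): v is Provider =>
    v === "anthropic" || v === "openai";
  if (valid(configValue)) return configValue;
  if (valid(envValue)) return envValue;
  return "anthropic";
}
```

Note that an invalid `LLM_PROVIDER` value is ignored rather than treated as an error, so the built-in default still applies.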

Environment variables per provider:

- Anthropic (default):
  - Required: `ANTHROPIC_API_KEY`
  - Optional: `ANTHROPIC_MODEL` (default: `claude-sonnet-4-5-20250929`)
  - Optional: `ANTHROPIC_EXTENDED_CONTEXT=true` to enable 1M context (beta API)

- OpenAI (official endpoint):
  - Required: `OPENAI_API_KEY`
  - Optional: `OPENAI_MODEL` (default: `gpt-5`)

Example OpenAI env:

```env
OPENAI_API_KEY=your_openai_key
# Optional model override
OPENAI_MODEL=gpt-5
```

## Running Revu

### Local Development

```bash
# Review a PR in dry-run mode using the current local version of Revu
# (displays the analysis without submitting anything to GitHub)
yarn review-pr https://github.com/owner/repo/pull/123
# Submit comments to GitHub after analysis
yarn review-pr https://github.com/owner/repo/pull/123 --submit
```

### Production

```bash
# Build and run the production server
yarn build
yarn start
```

## Usage

1. Install Revu on your GitHub repositories
2. When a PR is opened, Revu automatically adds the proxy user as a reviewer
3. Click "Request review" from the proxy user to trigger code review
4. Revu analyzes the code and posts detailed feedback

For CLI usage and testing, see [CLI Documentation](docs/cli-usage.md).

## Configuration

Revu is configurable through a `.revu.yml` file in your repository root:

```yaml
# Enable Extended Thinking
thinkingEnabled: true

# Custom coding guidelines
codingGuidelines:
  - 'Use descriptive variable names'
  - 'Add comments for complex logic'

# PR validation rules
validation:
  maxFilesChanged: 75
  maxDiffSize: 15000

# Branch filtering
branches:
  patterns:
    - '!**'
    - 'main'
    - 'release/*'
```
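
The branch patterns above read naturally as an ordered list where a `!` prefix excludes matches: `!**` first excludes everything, then `main` and `release/*` re-include those branches. Assuming "last matching pattern wins" semantics (an assumption about Revu's behavior, not confirmed by this document), the filtering can be sketched as:

```typescript
// Convert a simple glob to a RegExp: "**" matches anything,
// "*" matches within a single path segment.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&")
    .replace(/\*\*/g, ".*")
    .replace(/(?<!\.)\*/g, "[^/]*");
  return new RegExp(`^${escaped}$`);
}

// Evaluate patterns in order; a "!" prefix negates, last match wins.
function branchIsReviewed(branch: string, patterns: string[]): boolean {
  let allowed = false;
  for (const pattern of patterns) {
    const negated = pattern.startsWith("!");
    const glob = negated ? pattern.slice(1) : pattern;
    if (globToRegExp(glob).test(branch)) allowed = !negated;
  }
  return allowed;
}
```

With the patterns from the example, `main` and `release/1.2` would be reviewed while `feature/foo` would be skipped.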

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

MIT License - see LICENSE file for details.
**docs/CONFIGURATION.md** (new file, 195 additions)
# Configuration Guide

[🏠 Home](../README.md) | [📦 Installation](INSTALLATION.md)

## Overview

Revu uses environment variables and a JSON configuration file to manage settings. The application supports multiple LLM providers (Anthropic Claude and OpenAI GPT) and requires GitHub App authentication for webhook integration and API access.

Configuration is handled in two ways:
1. **Environment Variables** (`.env` file) - For sensitive credentials and deployment-specific settings
2. **Configuration File** (`config.json`) - For application behavior settings like prompt strategy and LLM provider selection

## Environment Variables

### Required Variables

The following environment variables are required for Revu to function:

| Variable | Description | Example Value |
| -------- | ----------- | ------------- |
| `ANTHROPIC_API_KEY` | Your Anthropic API key for Claude (required when `llmProvider` is `anthropic` or not set) | `sk-ant-...` |
| `APP_ID` | GitHub App ID for authentication | `123456` |
| `PRIVATE_KEY` or `PRIVATE_KEY_PATH` | GitHub App private key. Use `PRIVATE_KEY_PATH` to point to a `.pem` file, or `PRIVATE_KEY` to provide the key directly (with `\n` for line breaks) or as base64-encoded | `./github-app.pem` or `-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----` |
| `WEBHOOK_SECRET` | GitHub webhook secret for validating incoming webhook payloads | `your_webhook_secret` |
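
Since the private key can arrive in three forms (a file path, an inline PEM with `\n` escapes, or base64), normalizing it to a single PEM string can be sketched as follows (`resolvePrivateKey` is a hypothetical helper for illustration; the actual handling lives inside Revu):

```typescript
import fs from "node:fs";

// Normalize PRIVATE_KEY / PRIVATE_KEY_PATH to a plain PEM string.
function resolvePrivateKey(env: Record<string, string | undefined>): string {
  if (env.PRIVATE_KEY_PATH) {
    return fs.readFileSync(env.PRIVATE_KEY_PATH, "utf8");
  }
  const raw = env.PRIVATE_KEY;
  if (!raw) throw new Error("PRIVATE_KEY or PRIVATE_KEY_PATH is required");
  if (raw.includes("BEGIN")) {
    // Inline PEM: expand literal "\n" escapes into real line breaks.
    return raw.replace(/\\n/g, "\n");
  }
  // Otherwise assume the value is a base64-encoded PEM.
  return Buffer.from(raw, "base64").toString("utf8");
}
```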

### Optional Variables

| Variable | Description | Default Value | Example Value |
| -------- | ----------- | ------------- | ------------- |
| `ANTHROPIC_MODEL` | Anthropic model to use for code reviews | `claude-sonnet-4-5-20250929` | `claude-sonnet-4-5-20250929` |
| `ANTHROPIC_EXTENDED_CONTEXT` | Enable 1M token context window (opt-out: enabled by default) | `true` | `false` (to disable) |
| `LLM_PROVIDER` | Default LLM provider; used only when `llmProvider` is not set in `config.json` | `anthropic` | `openai` |
| `OPENAI_MODEL` | OpenAI model to use for code reviews | `gpt-5` | `gpt-4o` |
| `OPENAI_API_KEY` | OpenAI API key (required when `llmProvider` is set to `openai`) | None | `sk-...` |
| `WEBHOOK_PROXY_URL` | Webhook proxy URL for local development (e.g., smee.io) | None | `https://smee.io/your-channel` |
| `PROXY_REVIEWER_USERNAME` | Username for manual review requests via comments | None | `bot-user` |
| `PROXY_REVIEWER_TOKEN` | GitHub token for proxy reviewer user | None | `ghp_...` |
| `HOST` | Server host address | `0.0.0.0` | `127.0.0.1` |
| `PORT` | Server port | `3000` | `8080` |
| `GIT_PATH` | Full path to git executable (security: prevents PATH manipulation attacks) | `/usr/bin/git` | `/usr/local/bin/git` |

## Configuration Files

### Main Configuration File

**Location**: `config.json` (project root)

**Format**: JSON

The `config.json` file controls application behavior and can be used to set the LLM provider, enable thinking mode, and select prompt strategies.

```json
{
  "promptStrategy": "line-comments",
  "thinkingEnabled": true,
  "llmProvider": "anthropic"
}
```

**Available Options:**

- **`promptStrategy`** (string, required): The prompt strategy to use for code review
  - Default: `"line-comments"`
  - Currently supported: `"line-comments"`

- **`thinkingEnabled`** (boolean, optional): Enable Anthropic's extended thinking capabilities, or adjusted temperature/instructions for OpenAI, for deeper analysis
  - Default: `false`
  - When `true`: Enables chain-of-thought reasoning for more thorough code analysis

- **`llmProvider`** (string, optional): LLM provider to use for analysis
  - Default: `"anthropic"`
  - Allowed values: `"anthropic"` | `"openai"`
  - Note: when this is not set in `config.json`, the `LLM_PROVIDER` environment variable is used instead

### Configuration Precedence

For `llmProvider` specifically:
1. If `llmProvider` is explicitly set in `config.json`, it takes precedence
2. Otherwise, if `LLM_PROVIDER` environment variable is set, it will be used
3. If neither is set, defaults to `"anthropic"`

### Additional Configuration Files

#### `.env.example`

**Location**: `.env.example` (project root)

**Purpose**: Template file showing all available environment variables with example values. Copy this to `.env` and fill in your actual values.

```bash
cp .env.example .env
```

## Configuration Examples

### Minimal Configuration

The absolute minimum configuration needed to run with Anthropic (default):

**`.env` file:**
```bash
# Anthropic Configuration (default provider)
ANTHROPIC_API_KEY=sk-ant-your-api-key-here

# GitHub App Configuration
APP_ID=123456
PRIVATE_KEY_PATH=./github-app.pem
WEBHOOK_SECRET=your_webhook_secret
```

**`config.json` file:**
```json
{
  "promptStrategy": "line-comments"
}
```

### Development Configuration

Typical development setup with all common options:

**`.env` file:**
```bash
# Anthropic API Key
ANTHROPIC_API_KEY=sk-ant-your-api-key-here

# Anthropic Model Configuration
ANTHROPIC_MODEL=claude-sonnet-4-5-20250929

# Enable 1M token context window (opt-out: enabled by default)
ANTHROPIC_EXTENDED_CONTEXT=true

# GitHub App Configuration
APP_ID=123456
PRIVATE_KEY_PATH=./github-app.pem
WEBHOOK_SECRET=your_webhook_secret

# Optional: Webhook Proxy URL for local development
WEBHOOK_PROXY_URL=https://smee.io/your-channel

# Proxy User Configuration (for manual review requests)
PROXY_REVIEWER_USERNAME=bot-user
PROXY_REVIEWER_TOKEN=ghp_your_token_here

# Server Configuration
HOST=0.0.0.0
PORT=3000
```

**`config.json` file:**
```json
{
  "promptStrategy": "line-comments",
  "thinkingEnabled": true,
  "llmProvider": "anthropic"
}
```

### OpenAI Configuration

Configuration for using OpenAI instead of Anthropic:

**`.env` file:**
```bash
# OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-key-here
OPENAI_MODEL=gpt-5

# LLM Provider Selection (can also be set in config.json)
LLM_PROVIDER=openai

# GitHub App Configuration
APP_ID=123456
PRIVATE_KEY_PATH=./github-app.pem
WEBHOOK_SECRET=your_webhook_secret

# Server Configuration
HOST=0.0.0.0
PORT=3000
```

**`config.json` file:**
```json
{
  "promptStrategy": "line-comments",
  "thinkingEnabled": false,
  "llmProvider": "openai"
}
```

## Next Steps

After configuring the application:

- [📦 Installation](INSTALLATION.md) - Review installation steps