5 changes: 5 additions & 0 deletions .env.example
@@ -19,3 +19,8 @@ PROXY_REVIEWER_TOKEN=ghp_your_token_here

# App Configuration
REPOSITORY_FOLDER=/app/repos

# LLM Logging Configuration
# Controls what gets logged when exchanging with the LLM
# Options: disabled, metadata, truncated, full
LOG_LLM_EXCHANGES=metadata
6 changes: 6 additions & 0 deletions .kontinuous/env/prod/templates/app.configmap.yaml
@@ -4,3 +4,9 @@ metadata:
name: app
data:
REPOSITORY_FOLDER: "/app/repos"
LOG_LLM_EXCHANGES: "full"

#LOG_LLM_EXCHANGES=metadata # production mode
#LOG_LLM_EXCHANGES=truncated # quick review
#LOG_LLM_EXCHANGES=full # full LLM data
#LOG_LLM_EXCHANGES=disabled # no LLM logs
Comment on lines 6 to +12
These comment lines use # which is invalid YAML syntax within the data section of a ConfigMap. YAML comments must be outside of string values or the ConfigMap will fail to parse during deployment. Either remove these lines or move them outside the data section as proper YAML comments.

Suggested change
REPOSITORY_FOLDER: "/app/repos"
LOG_LLM_EXCHANGES: "full"
#LOG_LLM_EXCHANGES=metadata # production mode
#LOG_LLM_EXCHANGES=truncated # quick review
#LOG_LLM_EXCHANGES=full # full LLM data
#LOG_LLM_EXCHANGES=disabled # no LLM logs
# Configuration options for LOG_LLM_EXCHANGES:
# - metadata: production mode (logs only metadata)
# - truncated: quick review (logs metadata + truncated content)
# - full: full LLM data (logs complete prompts and responses)
# - disabled: no LLM logs
data:
REPOSITORY_FOLDER: "/app/repos"
LOG_LLM_EXCHANGES: "full"


162 changes: 162 additions & 0 deletions README.md
@@ -255,6 +255,7 @@ Requirements:
| `REPOSITORY_FOLDER` | string | Absolute path where repositories will be cloned |
| `PROXY_REVIEWER_USERNAME` | string | Username of the proxy user account for manual review requests |
| `PROXY_REVIEWER_TOKEN` | string | GitHub personal access token for the proxy user account |
| `LOG_LLM_EXCHANGES` | string | (Optional) Controls the LLM logging level: `disabled`, `metadata`, `truncated`, or `full` |

## Running the App

@@ -447,6 +448,167 @@ Enable debug logging:
DEBUG=revu:* yarn dev
```

## LLM Exchange Logging

Revu provides comprehensive logging of all exchanges with the LLM (Claude) to help with debugging, monitoring, and analysis. This feature is configurable and designed to balance visibility with security and performance.

### Configuration

Control LLM logging through the `LOG_LLM_EXCHANGES` environment variable:

```bash
# Available logging levels
LOG_LLM_EXCHANGES=disabled # No LLM logging
LOG_LLM_EXCHANGES=metadata # Only metadata (default)
LOG_LLM_EXCHANGES=truncated # Metadata + truncated content
LOG_LLM_EXCHANGES=full # Complete prompts and responses
```
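The level names above could be parsed and validated along these lines (a minimal sketch, not Revu's actual implementation — `parseLogLevel` and its fallback behavior are illustrative; the README only states that `metadata` is the default):

```typescript
// Hypothetical sketch: reading LOG_LLM_EXCHANGES with a safe fallback.
type LogLevel = 'disabled' | 'metadata' | 'truncated' | 'full'

const VALID_LEVELS: LogLevel[] = ['disabled', 'metadata', 'truncated', 'full']

function parseLogLevel(raw: string | undefined): LogLevel {
  // Unknown or missing values fall back to the documented default.
  if (raw !== undefined && (VALID_LEVELS as string[]).includes(raw)) {
    return raw as LogLevel
  }
  return 'metadata'
}

// Typically called once against the process environment:
const level = parseLogLevel(process.env.LOG_LLM_EXCHANGES)
console.log(level)
```

Falling back to `metadata` rather than throwing keeps a typo in the environment variable from disabling (or over-enabling) logging silently in an unexpected direction.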

### Logging Levels

#### `disabled`
- No LLM exchange logging
- Use for production environments with strict logging requirements

#### `metadata` (default)
- Logs request/response metadata only
- Includes: model used, duration, token usage, strategy, PR details
- **Recommended for production**: Provides insights without content exposure

#### `truncated`
- Metadata + first 500 characters of prompts/responses
- Balances debugging needs with content privacy
- **Recommended for development**: Good for debugging prompt issues

#### `full`
- Complete prompts and responses logged
- Maximum visibility for debugging
- **Security warning**: Contains full source code and sensitive data

### Log Format

All LLM logs follow a structured JSON format:

```json
{
"timestamp": "2025-01-18T16:49:00.000Z",
"service": "revu",
"level": "info",
"event_type": "llm_request_sent",
"model_used": "claude-sonnet-4-20250514",
"strategy_name": "line-comments",
"pr_number": 123,
"repository": "owner/repo"
}
```

### Event Types

- **`llm_request_sent`**: When a request is sent to Claude
- **`llm_response_received`**: When a response is received from Claude
- **`llm_request_failed`**: When an API request fails

### Metadata Fields

- `model_used`: Anthropic model used for the request
- `strategy_name`: Review strategy (e.g., "line-comments")
- `request_duration_ms`: Request duration in milliseconds
- `tokens_used`: Token usage `{input: number, output: number}`
- `pr_number`: Pull request number
- `repository`: Repository name
- `prompt_preview`: Truncated prompt (truncated/full modes)
- `response_preview`: Truncated response (truncated/full modes)
- `full_prompt`: Complete prompt (full mode only)
- `full_response`: Complete response (full mode only)
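The level-dependent fields above could be assembled roughly as follows (a sketch under assumptions: the field names mirror this README, but `buildRequestEntry` and the 500-character preview constant are illustrative, not Revu's actual code):

```typescript
// Hypothetical sketch: building a request log entry per logging level.
type LogLevel = 'disabled' | 'metadata' | 'truncated' | 'full'

const PREVIEW_LENGTH = 500 // truncated mode keeps the first 500 characters

function buildRequestEntry(
  prompt: string,
  model: string,
  strategy: string,
  level: LogLevel
): Record<string, unknown> | null {
  if (level === 'disabled') return null

  // Metadata fields are present at every enabled level.
  const entry: Record<string, unknown> = {
    timestamp: new Date().toISOString(),
    service: 'revu',
    level: 'info',
    event_type: 'llm_request_sent',
    model_used: model,
    strategy_name: strategy
  }
  // Content fields are added only as the level increases.
  if (level === 'truncated' || level === 'full') {
    entry.prompt_preview = prompt.slice(0, PREVIEW_LENGTH)
  }
  if (level === 'full') {
    entry.full_prompt = prompt
  }
  return entry
}
```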

### Use Cases

#### Production Monitoring
```bash
LOG_LLM_EXCHANGES=metadata
```
- Track API usage and performance
- Monitor token consumption
- Identify slow requests or failures
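Because the entries are structured JSON, monitoring questions like these reduce to simple pipelines. A sketch, assuming the logs land in a file (`app.log` and the sample line are illustrative, not Revu's documented log destination):

```shell
# Write one sample metadata-level entry, standing in for real log output.
printf '%s\n' '{"event_type":"llm_response_received","pr_number":123,"request_duration_ms":2500,"tokens_used":{"input":1500,"output":800}}' > app.log

# Pull per-request duration and token usage for completed LLM requests.
grep '"event_type":"llm_response_received"' app.log \
  | jq '{pr: .pr_number, ms: .request_duration_ms, tokens: .tokens_used}'
```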

#### Development Debugging
```bash
LOG_LLM_EXCHANGES=truncated
```
- Debug prompt engineering issues
- Verify request/response flow
- Analyze response quality

#### Deep Analysis
```bash
LOG_LLM_EXCHANGES=full
```
- Full content analysis
- Prompt optimization
- Response quality assessment

### Security Considerations

- **`full` mode**: Logs contain complete source code and potentially sensitive data
- **`truncated` mode**: May still contain sensitive information in previews
- **`metadata` mode**: Safe for production, contains no code content
- **Log rotation**: Ensure proper log rotation for large volumes
- **Access control**: Restrict access to logs containing sensitive data

### Performance Impact

- **`disabled`**: No performance impact
- **`metadata`**: Minimal impact (recommended)
- **`truncated`**: Low impact, slight string processing overhead
- **`full`**: Moderate impact due to large log entries

### Examples

#### Request Sent (metadata level)
```json
{
"timestamp": "2025-01-18T16:49:00.000Z",
"service": "revu",
"level": "info",
"event_type": "llm_request_sent",
"model_used": "claude-sonnet-4-20250514",
"strategy_name": "line-comments",
"pr_number": 123,
"repository": "owner/repo"
}
```

#### Response Received (metadata level)
```json
{
"timestamp": "2025-01-18T16:49:02.500Z",
"service": "revu",
"level": "info",
"event_type": "llm_response_received",
"model_used": "claude-sonnet-4-20250514",
"strategy_name": "line-comments",
"request_duration_ms": 2500,
"tokens_used": {"input": 1500, "output": 800},
"pr_number": 123,
"repository": "owner/repo"
}
```

#### Request Failed
```json
{
"timestamp": "2025-01-18T16:49:03.000Z",
"service": "revu",
"level": "error",
"event_type": "llm_request_failed",
"model_used": "claude-sonnet-4-20250514",
"strategy_name": "line-comments",
"error_message": "API rate limit exceeded",
"pr_number": 123,
"repository": "owner/repo"
}
```

## Contributing

1. **Development Setup**
166 changes: 166 additions & 0 deletions __tests__/llm-logging.test.ts
@@ -0,0 +1,166 @@
import { describe, expect, it, vi, beforeEach } from 'vitest'

// Mock console.log to capture log outputs
const mockConsoleLog = vi.spyOn(console, 'log').mockImplementation(() => {})

// Mock environment variables before importing the module
vi.stubEnv('LOG_LLM_EXCHANGES', 'metadata')

import {
logLLMRequestSent,
logLLMResponseReceived,
logLLMRequestFailed
} from '../src/utils/logger.ts'

describe('LLM Logging', () => {
beforeEach(() => {
vi.clearAllMocks()
})

describe('logLLMRequestSent', () => {
it('should log request with metadata level', () => {
const prompt = 'Test prompt'
const model = 'claude-sonnet-4-20250514'
const strategyName = 'line-comments'
const context = { pr_number: 123, repository: 'test/repo' }

logLLMRequestSent(prompt, model, strategyName, context)

expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"event_type":"llm_request_sent"')
)
expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"model_used":"claude-sonnet-4-20250514"')
)
expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"strategy_name":"line-comments"')
)
expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"pr_number":123')
)
expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"repository":"test/repo"')
)
})

it('should include prompt preview in truncated mode', () => {
const longPrompt = 'a'.repeat(1000)
const model = 'claude-sonnet-4-20250514'
const strategyName = 'line-comments'

// Mock environment to use truncated mode
vi.stubEnv('LOG_LLM_EXCHANGES', 'truncated')

logLLMRequestSent(longPrompt, model, strategyName)

expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"prompt_preview"')
)
})
})

describe('logLLMResponseReceived', () => {
it('should log response with duration and token usage', () => {
const response = '{"summary": "Test response"}'
const model = 'claude-sonnet-4-20250514'
const strategyName = 'line-comments'
const durationMs = 1500
const tokensUsed = { input: 100, output: 50 }
const context = { pr_number: 123, repository: 'test/repo' }

logLLMResponseReceived(
response,
model,
strategyName,
durationMs,
tokensUsed,
context
)

expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"event_type":"llm_response_received"')
)
expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"request_duration_ms":1500')
)
expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"tokens_used":{"input":100,"output":50}')
)
expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"pr_number":123')
)
})

it('should truncate long responses in truncated mode', () => {
const longResponse = 'b'.repeat(1000)
const model = 'claude-sonnet-4-20250514'
const strategyName = 'line-comments'
const durationMs = 1500

// Mock environment to use truncated mode
vi.stubEnv('LOG_LLM_EXCHANGES', 'truncated')

logLLMResponseReceived(longResponse, model, strategyName, durationMs)

expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"response_preview"')
)
})
})

describe('logLLMRequestFailed', () => {
it('should log request failure with error details', () => {
const error = new Error('API request failed')
const model = 'claude-sonnet-4-20250514'
const strategyName = 'line-comments'
const context = { pr_number: 123, repository: 'test/repo' }

logLLMRequestFailed(error, model, strategyName, context)

expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"event_type":"llm_request_failed"')
)
expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"level":"error"')
)
expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"error_message":"API request failed"')
)
expect(mockConsoleLog).toHaveBeenCalledWith(
expect.stringContaining('"pr_number":123')
)
})
})

describe('Log levels', () => {
it('should not log when disabled', () => {
vi.stubEnv('LOG_LLM_EXCHANGES', 'disabled')

logLLMRequestSent('test', 'claude-sonnet-4-20250514', 'line-comments')

expect(mockConsoleLog).not.toHaveBeenCalled()
})

it('should log only metadata when in metadata mode', () => {
vi.stubEnv('LOG_LLM_EXCHANGES', 'metadata')

const prompt = 'Test prompt'
logLLMRequestSent(prompt, 'claude-sonnet-4-20250514', 'line-comments')

const logCall = mockConsoleLog.mock.calls[0][0]
expect(logCall).not.toContain('"prompt_preview"')
expect(logCall).not.toContain('"full_prompt"')
})

it('should include full content in full mode', () => {
vi.stubEnv('LOG_LLM_EXCHANGES', 'full')

const prompt = 'Test prompt'
logLLMRequestSent(prompt, 'claude-sonnet-4-20250514', 'line-comments')

const logCall = mockConsoleLog.mock.calls[0][0]
expect(logCall).toContain('"full_prompt"')
expect(logCall).toContain('"prompt_preview"')
})
})
})