Merged

27 commits (changes shown from 22):
- 43b5be6 feat(backend): snapshot test responses (Swiftyos, May 26, 2025)
- 62227dd Merge branch 'dev' into swiftyos/add-pytest-snapshot-to-dev-dependencies (Swiftyos, May 26, 2025)
- 8936a42 added snapshot testing (Swiftyos, May 26, 2025)
- 3dd2690 formatting (Swiftyos, May 26, 2025)
- 0fa20e8 fmt (Swiftyos, May 26, 2025)
- 574f4f1 Merge branch 'dev' into swiftyos/add-pytest-snapshot-to-dev-dependencies (Swiftyos, May 26, 2025)
- ba0f9f3 update lock file (Swiftyos, May 26, 2025)
- d3bb799 updated lock file (Swiftyos, May 26, 2025)
- d8b1037 updated lock file (Swiftyos, May 26, 2025)
- 2b29f12 added testing doc to docs (Swiftyos, May 26, 2025)
- 010adf4 Move root contributing to introduciton (Swiftyos, May 26, 2025)
- 90e56bc Merge branch 'dev' into swiftyos/add-pytest-snapshot-to-dev-dependencies (Swiftyos, May 27, 2025)
- 2e4ef8f Merge branch 'dev' into swiftyos/add-pytest-snapshot-to-dev-dependencies (Swiftyos, May 28, 2025)
- 891e171 Merge branch 'dev' into swiftyos/add-pytest-snapshot-to-dev-dependencies (Swiftyos, May 29, 2025)
- 05f9050 updated snapshot testing dir (Swiftyos, May 29, 2025)
- 5c3371d fmt (Swiftyos, May 29, 2025)
- bc58bf8 fix(backend): Prevent test runner from wiping developer databases (Swiftyos, Jun 2, 2025)
- f566b1c refactor(backend): Comprehensive test improvements and code review fixes (Swiftyos, Jun 2, 2025)
- 416b4e6 remove docs (Swiftyos, Jun 2, 2025)
- d9157c9 updated poetry lock (Swiftyos, Jun 2, 2025)
- e386b21 fix(backend): Fix DB_PORT environment variable handling in test infra… (Swiftyos, Jun 2, 2025)
- 59ca3fc Merge branch 'dev' into swiftyos/add-pytest-snapshot-to-dev-dependencies (Swiftyos, Jun 2, 2025)
- 69b3c9c update lock file (Swiftyos, Jun 2, 2025)
- 99e74a4 Merge branch 'dev' into swiftyos/add-pytest-snapshot-to-dev-dependencies (ntindle, Jun 5, 2025)
- 1f974a0 fix: lock (ntindle, Jun 5, 2025)
- 3bf2c26 Merge branch 'dev' into swiftyos/add-pytest-snapshot-to-dev-dependencies (Swiftyos, Jun 6, 2025)
- c8f3793 added testing docs (Swiftyos, Jun 6, 2025)

132 changes: 132 additions & 0 deletions autogpt_platform/CLAUDE.md
@@ -0,0 +1,132 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Repository Overview

AutoGPT Platform is a monorepo containing:
- **Backend** (`/backend`): Python FastAPI server with async support
- **Frontend** (`/frontend`): Next.js React application
- **Shared Libraries** (`/autogpt_libs`): Common Python utilities

## Essential Commands

### Backend Development
```bash
# Install dependencies
cd backend && poetry install

# Run database migrations
poetry run prisma migrate dev

# Start all services (database, redis, rabbitmq)
docker compose up -d

# Run the backend server
poetry run serve

# Run tests
poetry run test

# Run specific test
poetry run pytest path/to/test_file.py::test_function_name

# Lint and format
poetry run format # Black + isort
poetry run lint # ruff
```
More details can be found in `TESTING.md`.

#### Creating/Updating Snapshots

When you first write a test or when the expected output changes:

```bash
poetry run pytest path/to/test.py --snapshot-update
```

⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
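
For reference, a minimal snapshot test might look like the sketch below. It leans on the `configured_snapshot` fixture added in `backend/server/conftest.py` (which points `snapshot_dir` at `snapshots/`); the payload and snapshot name here are illustrative only, not an existing test.

```python
import json

from pytest_snapshot.plugin import Snapshot


def test_example_response_snapshot(configured_snapshot: Snapshot) -> None:
    """Compare a serialized response against the stored snapshot file."""
    response_data = {"status": "ok", "items": [1, 2, 3]}  # hypothetical payload

    # Serialize deterministically so the snapshot stays stable across runs
    configured_snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "example_response",
    )
```

Running this once with `--snapshot-update` writes `snapshots/example_response`; later runs fail if the serialized output drifts from that file.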


### Frontend Development
```bash
# Install dependencies
cd frontend && npm install

# Start development server
npm run dev

# Run E2E tests
npm run test

# Run Storybook for component development
npm run storybook

# Build production
npm run build

# Type checking
npm run type-check
```

## Architecture Overview

### Backend Architecture
- **API Layer**: FastAPI with REST and WebSocket endpoints
- **Database**: PostgreSQL with Prisma ORM, includes pgvector for embeddings
- **Queue System**: RabbitMQ for async task processing
- **Execution Engine**: Separate executor service processes agent workflows
- **Authentication**: JWT-based with Supabase integration

### Frontend Architecture
- **Framework**: Next.js App Router with React Server Components
- **State Management**: React hooks + Supabase client for real-time updates
- **Workflow Builder**: Visual graph editor using @xyflow/react
- **UI Components**: Radix UI primitives with Tailwind CSS styling
- **Feature Flags**: LaunchDarkly integration

### Key Concepts
1. **Agent Graphs**: Workflow definitions stored as JSON, executed by the backend
2. **Blocks**: Reusable components in `/backend/backend/blocks/` that perform specific tasks
3. **Integrations**: OAuth and API connections stored per user
4. **Store**: Marketplace for sharing agent templates

### Testing Approach
- Backend uses pytest with snapshot testing for API responses
- Test files are colocated with source files (`*_test.py`)
- Frontend uses Playwright for E2E tests
- Component testing via Storybook

### Database Schema
Key models (defined in `/backend/schema.prisma`); a query sketch follows the list:
- `User`: Authentication and profile data
- `AgentGraph`: Workflow definitions with version control
- `AgentGraphExecution`: Execution history and results
- `AgentNode`: Individual nodes in a workflow
- `StoreListing`: Marketplace listings for sharing agents
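
A hypothetical sketch of querying one of these models through the generated Prisma Python client; the `userId` filter field is an assumption about the schema, not something confirmed in this PR:

```python
from prisma import Prisma  # generated Prisma client for Python


async def list_graphs_for_user(user_id: str):
    """Fetch all AgentGraph rows owned by a user (illustrative only)."""
    db = Prisma()
    await db.connect()
    try:
        # Model accessors are lowercased by the Python client: AgentGraph -> agentgraph
        return await db.agentgraph.find_many(where={"userId": user_id})
    finally:
        await db.disconnect()
```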

### Environment Configuration
- Backend: `.env` file in `/backend`
- Frontend: `.env.local` file in `/frontend`
- Both require Supabase credentials and API keys for various services

### Common Development Tasks

**Adding a new block** (see the sketch after this list):
1. Create new file in `/backend/backend/blocks/`
2. Inherit from `Block` base class
3. Define input/output schemas
4. Implement `run` method
5. Register in block registry
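
A minimal sketch of such a block, assuming the `Block`/`BlockSchema`/`BlockOutput` base types and the `SchemaField` helper follow the pattern of the existing blocks under `/backend/backend/blocks/`; the class name, placeholder id, and fields are illustrative, and the exact `run` signature may differ:

```python
from backend.data.block import Block, BlockOutput, BlockSchema
from backend.data.model import SchemaField


class WordCountBlock(Block):
    """Hypothetical block that counts the words in a piece of text."""

    class Input(BlockSchema):
        text: str = SchemaField(description="Text to count words in")

    class Output(BlockSchema):
        word_count: int = SchemaField(description="Number of words found")

    def __init__(self):
        super().__init__(
            id="00000000-0000-0000-0000-000000000000",  # placeholder UUID
            description="Counts the words in the input text",
            input_schema=WordCountBlock.Input,
            output_schema=WordCountBlock.Output,
        )

    def run(self, input_data: Input, **kwargs) -> BlockOutput:
        # Emit each output as a (name, value) pair
        yield "word_count", len(input_data.text.split())
```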

**Modifying the API** (see the sketch after this list):
1. Update route in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
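
A minimal sketch of a new route, reusing the `get_user_id` dependency that the existing routers and tests rely on; the `/echo` path, `echo.py` filename, and request model are hypothetical:

```python
# Hypothetical new router, e.g. /backend/backend/server/routers/echo.py
import fastapi
import pydantic

from backend.server.utils import get_user_id

router = fastapi.APIRouter()


class EchoRequest(pydantic.BaseModel):
    message: str


@router.post("/echo")
async def echo(
    request: EchoRequest,
    user_id: str = fastapi.Depends(get_user_id),
) -> str:
    # Echo the message back, tagged with the authenticated user's id
    return f"{user_id}: {request.message}"
```

The colocated test would mount this router on a `fastapi.FastAPI` app, override `get_user_id` with a test double, and call it through `fastapi.testclient.TestClient`, exactly as the analytics tests below do.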

**Frontend feature development:**
1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
4. Test with Playwright if user-facing
17 changes: 17 additions & 0 deletions autogpt_platform/backend/backend/server/conftest.py
@@ -0,0 +1,17 @@
"""Common test fixtures for server tests."""

import pytest
from pytest_snapshot.plugin import Snapshot


@pytest.fixture
def configured_snapshot(snapshot: Snapshot) -> Snapshot:
"""Pre-configured snapshot fixture with standard settings."""
snapshot.snapshot_dir = "snapshots"
return snapshot


# Test ID constants
TEST_USER_ID = "test-user-id"
ADMIN_USER_ID = "admin-user-id"
TARGET_USER_ID = "target-user-id"
139 changes: 139 additions & 0 deletions (file path not shown)
@@ -0,0 +1,139 @@
"""Example of analytics tests with improved error handling and assertions."""

import json
from unittest.mock import AsyncMock, Mock

import fastapi
import fastapi.testclient
import pytest_mock
from pytest_snapshot.plugin import Snapshot

import backend.server.routers.analytics as analytics_routes
from backend.server.conftest import TEST_USER_ID
from backend.server.test_helpers import (
assert_error_response_structure,
assert_mock_called_with_partial,
assert_response_status,
safe_parse_json,
)
from backend.server.utils import get_user_id

app = fastapi.FastAPI()
app.include_router(analytics_routes.router)

client = fastapi.testclient.TestClient(app)


def override_get_user_id() -> str:
"""Override get_user_id for testing"""
return TEST_USER_ID


app.dependency_overrides[get_user_id] = override_get_user_id


def test_log_raw_metric_success_improved(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
) -> None:
"""Test successful raw metric logging with improved assertions."""
# Mock the analytics function
mock_result = Mock(id="metric-123-uuid")

mock_log_metric = mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
return_value=mock_result,
)

request_data = {
"metric_name": "page_load_time",
"metric_value": 2.5,
"data_string": "/dashboard",
}

response = client.post("/log_raw_metric", json=request_data)

# Improved assertions with better error messages
assert_response_status(response, 200, "Metric logging should succeed")
response_data = safe_parse_json(response, "Metric response parsing")

assert response_data == "metric-123-uuid", f"Unexpected response: {response_data}"

# Verify the function was called with correct parameters
assert_mock_called_with_partial(
mock_log_metric,
user_id=TEST_USER_ID,
metric_name="page_load_time",
metric_value=2.5,
data_string="/dashboard",
)

# Snapshot test the response
configured_snapshot.assert_match(
json.dumps({"metric_id": response_data}, indent=2, sort_keys=True),
"analytics_log_metric_success_improved",
)


def test_log_raw_metric_invalid_request_improved() -> None:
"""Test invalid metric request with improved error assertions."""
# Test missing required fields
response = client.post("/log_raw_metric", json={})

error_data = assert_error_response_structure(
response, expected_status=422, expected_error_fields=["loc", "msg", "type"]
)

# Verify specific error details
detail = error_data["detail"]
assert isinstance(detail, list), "Error detail should be a list"
assert len(detail) > 0, "Should have at least one error"

# Check that required fields are mentioned in errors
error_fields = [error["loc"][-1] for error in detail if "loc" in error]
assert "metric_name" in error_fields, "Should report missing metric_name"
assert "metric_value" in error_fields, "Should report missing metric_value"
assert "data_string" in error_fields, "Should report missing data_string"


def test_log_raw_metric_type_validation_improved() -> None:
"""Test metric type validation with improved assertions."""
invalid_requests = [
{
"data": {
"metric_name": "test",
"metric_value": "not_a_number", # Invalid type
"data_string": "test",
},
"expected_error": "Input should be a valid number",
},
{
"data": {
"metric_name": "", # Empty string
"metric_value": 1.0,
"data_string": "test",
},
"expected_error": "String should have at least 1 character",
},
{
"data": {
"metric_name": "test",
"metric_value": float("inf"), # Infinity
"data_string": "test",
},
"expected_error": "ensure this value is finite",
},
]

for test_case in invalid_requests:
response = client.post("/log_raw_metric", json=test_case["data"])

error_data = assert_error_response_structure(response, expected_status=422)

# Check that expected error is in the response
error_text = json.dumps(error_data)
assert (
test_case["expected_error"] in error_text
or test_case["expected_error"].lower() in error_text.lower()
), f"Expected error '{test_case['expected_error']}' not found in: {error_text}"
107 changes: 107 additions & 0 deletions (file path not shown)
@@ -0,0 +1,107 @@
"""Example of parametrized tests for analytics endpoints."""

import json
from unittest.mock import AsyncMock, Mock

import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot

import backend.server.routers.analytics as analytics_routes
from backend.server.conftest import TEST_USER_ID
from backend.server.utils import get_user_id

app = fastapi.FastAPI()
app.include_router(analytics_routes.router)

client = fastapi.testclient.TestClient(app)


def override_get_user_id() -> str:
"""Override get_user_id for testing"""
return TEST_USER_ID


app.dependency_overrides[get_user_id] = override_get_user_id


@pytest.mark.parametrize(
"metric_value,metric_name,data_string,test_id",
[
(100, "api_calls_count", "external_api", "integer_value"),
(0, "error_count", "no_errors", "zero_value"),
(-5.2, "temperature_delta", "cooling", "negative_value"),
(1.23456789, "precision_test", "float_precision", "float_precision"),
(999999999, "large_number", "max_value", "large_number"),
(0.0000001, "tiny_number", "min_value", "tiny_number"),
],
)
def test_log_raw_metric_values_parametrized(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
metric_value: float,
metric_name: str,
data_string: str,
test_id: str,
) -> None:
"""Test raw metric logging with various metric values using parametrize."""
# Mock the analytics function
mock_result = Mock(id=f"metric-{test_id}-uuid")

mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
return_value=mock_result,
)

request_data = {
"metric_name": metric_name,
"metric_value": metric_value,
"data_string": data_string,
}

response = client.post("/log_raw_metric", json=request_data)

# Better error handling
assert response.status_code == 200, f"Failed for {test_id}: {response.text}"
response_data = response.json()

# Snapshot test the response
configured_snapshot.assert_match(
json.dumps(
{"metric_id": response_data, "test_case": test_id}, indent=2, sort_keys=True
),
f"analytics_metric_{test_id}",
)


@pytest.mark.parametrize(
"invalid_data,expected_error",
[
({}, "Field required"), # Missing all fields
({"metric_name": "test"}, "Field required"), # Missing metric_value
(
{"metric_name": "test", "metric_value": "not_a_number"},
"Input should be a valid number",
), # Invalid type
(
{"metric_name": "", "metric_value": 1.0, "data_string": "test"},
"String should have at least 1 character",
), # Empty name
],
)
def test_log_raw_metric_invalid_requests_parametrized(
invalid_data: dict,
expected_error: str,
) -> None:
"""Test invalid metric requests with parametrize."""
response = client.post("/log_raw_metric", json=invalid_data)

assert response.status_code == 422
error_detail = response.json()
assert "detail" in error_detail
# Verify error message contains expected error
error_text = json.dumps(error_detail)
assert expected_error in error_text or expected_error.lower() in error_text.lower()