A comprehensive Python-based API automation testing framework for microservices testing using pytest. This framework provides reusable utilities, dynamic payload management, and extensive reporting capabilities.
- Overview
- Project Structure
- Architecture
- Prerequisites
- Setup and Installation
- Configuration
- Services Covered
- Writing Tests
- Running Tests
- Reporting
- Utilities Documentation
- Best Practices
- Troubleshooting
This framework is designed to test multiple microservices with a focus on:
- Modularity: Reusable utilities for authentication, API calls, and data management
- Maintainability: Separation of test logic, payloads, and configuration
- Extensibility: Easy addition of new services and test cases
- Reporting: Multiple reporting formats (HTML, Allure)
- Configuration Management: Environment-based configuration using `.env` files
```
Console-API-Automation/
├── tests/                            # Test modules
│   ├── test_campaign_service.py      # Campaign service E2E tests
│   └── test_search_services.py       # Search API tests (Campaign, Project, Facility, Staff)
├── utils/                            # Utility modules
│   ├── api_client.py                 # HTTP client wrapper
│   ├── auth.py                       # Authentication token management
│   ├── config.py                     # Configuration loader
│   ├── data_loader.py                # Payload loader with dynamic dates
│   ├── request_info.py               # Request metadata builder
│   └── search_helpers.py             # Common search operations
├── payloads/                         # JSON payload templates
│   └── campaign/                     # Campaign service payloads
│       ├── create_setup.json         # Initial campaign setup
│       ├── update_boundary.json      # Add boundary information
│       ├── update_delivery.json      # Add delivery rules
│       ├── update_files.json         # Add resource files
│       ├── create_campaign.json      # Finalize campaign creation
│       ├── search_campaign.json      # Search campaigns
│       ├── search_project.json       # Search projects by campaign
│       ├── search_project_facility.json  # Search project facilities
│       └── search_project_staff.json     # Search project staff
├── data/                             # Test data
│   ├── inputs.json                   # Test input data
│   └── outputs/                      # Test outputs
│       └── campaign_ids.json         # Generated campaign IDs
├── reports/                          # Test reports
│   ├── report.html                   # Pytest HTML report
│   ├── dashboard.html                # Dashboard template
│   └── campaign_dashboard.html       # Generated campaign dashboard
├── generate_dashboard.py             # Dashboard generator script
├── .env                              # Environment configuration
├── pytest.ini                        # Pytest configuration
├── requirements.txt                  # Python dependencies
└── README.md                         # This file
```
- **API Client Layer** (`utils/api_client.py`)
  - Abstraction over HTTP requests
  - Automatic authentication header injection
  - Support for GET, POST, PUT, DELETE methods

- **Authentication Module** (`utils/auth.py`)
  - OAuth2 token acquisition
  - Token caching per service

- **Configuration Management** (`utils/config.py`)
  - Centralized environment variable loading
  - Reusable search parameters
  - Service-specific configurations

- **Payload Management** (`utils/data_loader.py`)
  - Dynamic JSON payload loading
  - Template-based payload structure

- **Request Metadata** (`utils/request_info.py`)
  - Standardized RequestInfo object creation
  - API metadata and user context

- **Search Helpers** (`utils/search_helpers.py`)
  - Generic search functionality
  - ID extraction from output files
  - Reusable across multiple services
```
Test Execution
      ↓
Authentication (get_auth_token)
      ↓
API Client Initialization
      ↓
Load Payload Template (data_loader)
      ↓
Inject Dynamic Data (UUID, IDs, etc.)
      ↓
Add RequestInfo
      ↓
API Call (via APIClient)
      ↓
Validate Response (assertions)
      ↓
Store IDs/Data (output files)
      ↓
Generate Reports
```
- Python: 3.8 or higher
- pip: Python package manager
- Virtual Environment: Recommended for dependency isolation
- Git: For version control
```bash
git clone <repository-url>
cd api_automation_project
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install python-dotenv requests pytest pytest-html pytest-metadata allure-pytest
```

Create or update the `.env` file with your environment-specific values:
```
BASE_URL=https://your-api-server.com
USERNAME=your_username
PASSWORD=your_password
TENANTID=your_tenant
LOCALE=en_MZ
USERTYPE=EMPLOYEE
CLIENT_AUTH_HEADER=Basic <base64_encoded_credentials>
SEARCH_LIMIT=200
SEARCH_OFFSET=0
HIERARCHYTYPE=MICROPLAN
BOUNDARY_TYPE=LOCALITY
BOUNDARY_CODE=your_boundary_code
```

Verify the setup by running the test suite:

```bash
pytest tests/ -v
```

| Variable | Description | Example |
|---|---|---|
| `BASE_URL` | API base URL | https://hcm-demo.digit.org |
| `USERNAME` | API username | LNMZ |
| `PASSWORD` | API password | eGov@1234 |
| `TENANTID` | Tenant identifier | mz |
| `LOCALE` | Locale setting | en_MZ |
| `USERTYPE` | User type | EMPLOYEE |
| `CLIENT_AUTH_HEADER` | Basic auth header for OAuth | Basic ZWdvdi11c2VyLWNsaWVudDo= |
| `SEARCH_LIMIT` | Default search limit | 200 |
| `SEARCH_OFFSET` | Default search offset | 0 |
| `HIERARCHYTYPE` | Boundary hierarchy type | MICROPLAN |
| `BOUNDARY_TYPE` | Boundary type | LOCALITY |
| `BOUNDARY_CODE` | Boundary code | MICROPLAN_MO_13_03_02_03_02_TUGLOR |
| `SERVICE_PROJECT` | Project service endpoint | /project/v1 |
| `SERVICE_PROJECT_FACILITY` | Project facility endpoint | /project/facility/v1 |
| `SERVICE_PROJECT_STAFF` | Project staff endpoint | /project/staff/v1 |
| `SERVICE_PROJECT_FACTORY` | Project factory endpoint | /project-factory/v1/project-type |
```ini
[pytest]
pythonpath = .
```

This ensures the root directory is on the Python path for imports.
| Service | Operations | Test File |
|---|---|---|
| Campaign | Create Setup, Update Boundary, Update Delivery, Update Files, Create Campaign | test_campaign_service.py |
| Search Services | Search Campaign, Search Project, Search Project Facility, Search Project Staff | test_search_services.py |
Total: 2 Test Files, 9 Payload Templates
- `TestCampaignSetup` - Campaign setup creation tests
- `TestCampaignBoundary` - Campaign boundary update tests
- `TestCampaignDelivery` - Campaign delivery rules tests
- `TestCampaignCreate` - Campaign finalization tests
- `TestCampaignSearch` - Basic campaign search tests
- `TestCampaignE2E` - End-to-end campaign workflow
- `TestCampaignSearchService` - Campaign search API tests (4 tests)
- `TestProjectSearchService` - Project search API tests (5 tests)
- `TestProjectFacilitySearchService` - Project facility search tests (5 tests)
- `TestProjectStaffSearchService` - Project staff search tests (5 tests)
- `TestSearchServicesE2E` - End-to-end search flow test
The campaign service tests follow a multi-step workflow:
- Create Setup - Initialize campaign with basic details
- Update Boundary - Add boundary/hierarchy information
- Update Delivery - Configure delivery rules and cycles
- Update Files - Attach resource files (users, facilities, boundaries)
- Create Campaign - Finalize and activate the campaign
- Search Campaign - Verify campaign was created successfully
- Search Project - Find projects by campaign number (referenceID)
- Search Project Facility - Find facilities assigned to projects
- Search Project Staff - Find staff assigned to projects
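The project-search step above can be sketched as a helper that reads the stored campaign number and uses it as the `referenceID` filter. The helper name `search_projects_by_campaign` and the exact payload shape are illustrative assumptions, not the framework's actual code:

```python
# Hypothetical helper: search projects using the campaign number stored by the
# campaign tests. Field names other than referenceID are assumptions.
import json

def search_projects_by_campaign(client, request_info, tenant_id="mz"):
    # Read the campaign number written by the campaign-creation tests
    with open("data/outputs/campaign_ids.json") as f:
        campaign_number = json.load(f)["campaignNumber"]

    payload = {
        "RequestInfo": request_info,
        "Projects": [{"tenantId": tenant_id, "referenceID": campaign_number}],
    }
    return client.post("/project/v1/_search", payload)
```

Facility and staff searches follow the same shape against their own `_search` endpoints, filtering by the project IDs returned here.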
Each test module follows this pattern:
```python
# 1. Imports
from utils.api_client import APIClient
from utils.auth import get_auth_token
from utils.data_loader import load_payload, apply_dynamic_dates
from utils.request_info import get_request_info
from utils.config import tenantId, locale
import json

# 2. Test functions (with assertions)
def test_create_campaign():
    """Test case with assertions"""
    token = get_auth_token("user")
    client = APIClient(token=token)
    response = create_campaign_setup(token, client)

    # Assertions
    assert response.status_code in [200, 202], f"Failed: {response.text}"
    campaign_id = response.json()["CampaignDetails"]["id"]
    assert campaign_id, "Campaign ID not generated"

    # Store ID for later use
    with open("data/outputs/campaign_ids.json", "w") as f:
        json.dump({"campaignId": campaign_id}, f)

# 3. Helper functions (reusable, no assertions)
def create_campaign_setup(token, client):
    """Helper function for campaign creation"""
    payload = load_payload("campaign", "create_setup.json")
    payload = apply_dynamic_dates(payload)  # Set future dates

    # Inject dynamic data
    payload["RequestInfo"] = get_request_info(token)
    payload["CampaignDetails"]["tenantId"] = tenantId
    payload["CampaignDetails"]["locale"] = locale

    return client.post("/project-factory/v1/project-type/create", payload)
```

- Separation of Concerns: Test functions contain assertions; helper functions contain reusable logic
- Token Reuse: Obtain token once per test, reuse across operations
- Dynamic Data Injection: Use UUID for unique identifiers, extract IDs from output files for dependencies
- Status Code Flexibility: Accept both 200 (OK) and 202 (Accepted)
- Detailed Error Messages: Include response text in assertion failures
1. **Create payload directory:**

   ```bash
   mkdir payloads/new_service
   ```

2. **Add payload templates:**

   ```bash
   # Create JSON files for create and search operations
   touch payloads/new_service/create_entity.json
   touch payloads/new_service/search_entity.json
   ```

3. **Create test file:**

   ```bash
   touch tests/test_new_service.py
   ```

4. **Implement tests:**

   ```python
   from utils.api_client import APIClient
   from utils.auth import get_auth_token
   from utils.data_loader import load_payload, apply_dynamic_dates
   from utils.request_info import get_request_info
   from utils.config import tenantId, locale
   import uuid

   def test_create_new_entity():
       token = get_auth_token("user")
       client = APIClient(token=token)
       response = create_new_entity(token, client)
       assert response.status_code in [200, 202]

   def create_new_entity(token, client):
       payload = load_payload("new_service", "create_entity.json")
       payload = apply_dynamic_dates(payload)  # If payload has date fields
       payload["Entity"]["clientReferenceId"] = str(uuid.uuid4())
       payload["RequestInfo"] = get_request_info(token)
       return client.post("/new-service/v1/_create", payload)
   ```
```bash
# Activate virtual environment
source venv/bin/activate

# Run all tests
pytest tests/

# Run specific test file
pytest tests/test_campaign_service.py
pytest tests/test_search_services.py

# Run specific test class
pytest tests/test_search_services.py::TestProjectSearchService -v
pytest tests/test_search_services.py::TestProjectFacilitySearchService -v

# Run specific test function
pytest tests/test_campaign_service.py::TestCampaignE2E::test_complete_campaign_workflow
pytest tests/test_search_services.py::TestSearchServicesE2E::test_complete_search_flow

# Run with verbose output
pytest tests/ -v

# Run with print statements visible
pytest tests/ -s
```

To generate an HTML report:

```bash
pytest tests/ --html=reports/report.html --self-contained-html
```

The HTML report will be generated at `reports/report.html` with:
- Test results summary
- Pass/Fail status
- Execution time
- Error details
```bash
# Generate Allure results
pytest --alluredir=allure-results

# Generate Allure report
allure generate allure-results --clean -o allure-report

# Open Allure report in browser
allure open allure-report
```

For a clean test run, remove the previous campaign IDs file before running tests:

```bash
rm -f data/outputs/campaign_ids.json && pytest tests/ --html=reports/report.html --self-contained-html
```
- `data/outputs/campaign_ids.json`
  - Stores campaign details created during test execution
  - JSON format with comprehensive campaign data:
    - `campaignId`, `campaignNumber`, `campaignName`
    - `totalCount` - Total projects created
    - `projectsByBoundaryType` - Project IDs grouped by boundary type
    - `facilityCount` - Total facilities assigned
    - `facilityIds` - List of facility IDs
    - `staffCount` - Total staff assigned
    - `staffIds` - List of staff IDs
- **HTML Report** (`reports/report.html`)
  - Self-contained HTML file
  - Summary dashboard with pass/fail counts
  - Detailed test results with error traces

- **Allure Report** (`allure-report/`)
  - Rich, interactive web-based report
  - Test execution trends
  - Test categorization and filtering
  - Detailed logs and attachments

- **Campaign Dashboard** (`reports/campaign_dashboard.html`)
  - Visual dashboard showing campaign test results
  - Displays campaign details, projects, facilities, and staff
  - Auto-generated from test output data
The framework includes a visual dashboard to display campaign test results.
```bash
# Generate dashboard from test output
python3 generate_dashboard.py

# Open dashboard in default browser
xdg-open reports/campaign_dashboard.html

# Or use a Python HTTP server
cd reports && python3 -m http.server 8080
# Then open http://localhost:8080/campaign_dashboard.html
```

The dashboard displays:
| Section | Description |
|---|---|
| Stats Cards | Campaign count, Projects, Facilities, Staff, Boundary Types |
| Campaign Details | Campaign ID, Number, Name, Status |
| Projects by Boundary | Project IDs grouped by boundary type (COUNTRY, PROVINCE, DISTRICT, etc.) |
| Facilities | List of all facility IDs |
| Staff | List of all staff IDs |
```bash
# Run tests and regenerate dashboard
pytest tests/test_campaign_service.py -v && python3 generate_dashboard.py

# Open updated dashboard
xdg-open reports/campaign_dashboard.html
```

**Class:** `APIClient`
HTTP client wrapper with automatic authentication.
```python
from utils.api_client import APIClient

# Initialize with token
client = APIClient(token="your_token_here")

# Make requests
response = client.get("/endpoint")
response = client.post("/endpoint", payload)
response = client.put("/endpoint", payload)
response = client.delete("/endpoint")
```

Constructor parameters:
- `service` (optional): Service name to fetch a token for
- `token` (optional): Direct token value
- Must provide either `service` or `token`

Methods:
- `get(endpoint, params=None)`: GET request
- `post(endpoint, data=None)`: POST request
- `put(endpoint, data=None)`: PUT request
- `delete(endpoint)`: DELETE request
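As a reference point, here is a minimal sketch of what `APIClient` might look like internally. It assumes a `requests`-based implementation and an `auth-token` header name; both are assumptions, so check `utils/api_client.py` for the actual header and base-URL handling:

```python
# Minimal sketch of an APIClient, assuming requests + an "auth-token" header.
import os
import requests

class APIClient:
    def __init__(self, service=None, token=None):
        if token is None and service is None:
            raise ValueError("Provide either `service` or `token`")
        if token is None:
            from utils.auth import get_auth_token  # assumed helper location
            token = get_auth_token(service)
        self.base_url = os.environ.get("BASE_URL", "").rstrip("/")
        self.headers = {
            "Content-Type": "application/json",
            "auth-token": token,  # assumption: header name used by the API
        }

    def _url(self, endpoint):
        # Join BASE_URL and the endpoint without doubling slashes
        return f"{self.base_url}/{endpoint.lstrip('/')}"

    def get(self, endpoint, params=None):
        return requests.get(self._url(endpoint), headers=self.headers, params=params)

    def post(self, endpoint, data=None):
        return requests.post(self._url(endpoint), headers=self.headers, json=data)

    def put(self, endpoint, data=None):
        return requests.put(self._url(endpoint), headers=self.headers, json=data)

    def delete(self, endpoint):
        return requests.delete(self._url(endpoint), headers=self.headers)
```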
**Function:** `get_auth_token(service)`

Obtains an OAuth2 access token for a service.

```python
from utils.auth import get_auth_token

token = get_auth_token("user")
```

Parameters:
- `service` (str): Service name (e.g., `"user"`, `"individual"`)

Returns:
- `str`: Access token

Raises:
- `Exception`: If authentication fails
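Conceptually, the module does a password-grant OAuth2 request and caches the token per service. The sketch below shows the idea; the token endpoint path and response shape are assumptions (the `_post` parameter exists only to make the sketch testable):

```python
# Hypothetical sketch of utils/auth.py: password-grant token with per-service cache.
import os
import requests

_token_cache = {}  # service name -> cached access token

def get_auth_token(service, _post=requests.post):
    """Return a cached OAuth2 token for `service`, fetching one if needed."""
    if service in _token_cache:
        return _token_cache[service]
    resp = _post(
        f"{os.environ['BASE_URL']}/user/oauth/token",  # assumed endpoint path
        headers={"Authorization": os.environ["CLIENT_AUTH_HEADER"]},
        data={
            "grant_type": "password",
            "username": os.environ["USERNAME"],
            "password": os.environ["PASSWORD"],
            "tenantId": os.environ["TENANTID"],
            "userType": os.environ["USERTYPE"],
        },
    )
    if resp.status_code != 200:
        raise Exception(f"Authentication failed: {resp.text}")
    token = resp.json()["access_token"]  # assumed response field
    _token_cache[service] = token
    return token
```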
Configuration module with environment variables.

```python
from utils.config import BASE_URL, tenantId, locale, search_params

# Use configuration values
url = BASE_URL
tenant = tenantId
loc = locale            # e.g., "en_MZ"
params = search_params  # Contains limit, offset, tenantId
```

Available variables:
- `BASE_URL`: API base URL
- `tenantId`: Tenant identifier
- `locale`: Locale setting (e.g., `en_MZ`)
- `search_limit`, `search_offset`: Pagination settings
- `search_params`: Dictionary with limit, offset, tenantId
- `hierarchyType`, `boundaryCode`, `boundaryType`: Boundary configs
**Function:** `load_payload(service_name, filename)`

Loads a JSON payload template.

```python
from utils.data_loader import load_payload

payload = load_payload("campaign", "create_setup.json")
```

Parameters:
- `service_name` (str): Service folder name under `payloads/`
- `filename` (str): JSON file name

Returns:
- `dict`: Parsed JSON payload
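The loader amounts to resolving `payloads/<service_name>/<filename>` and parsing it. A minimal sketch, assuming the path is resolved from the project root:

```python
# Hypothetical sketch of load_payload: resolve payloads/<service>/<file>, parse JSON.
import json
from pathlib import Path

PAYLOAD_DIR = Path("payloads")  # assumed to be resolved relative to the project root

def load_payload(service_name, filename):
    path = PAYLOAD_DIR / service_name / filename
    with path.open() as f:
        return json.load(f)
```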
**Function:** `apply_dynamic_dates(payload)`

Applies dynamic future dates to campaign payloads, preventing test failures from expired dates.

```python
from utils.data_loader import load_payload, apply_dynamic_dates

payload = load_payload("campaign", "create_setup.json")
payload = apply_dynamic_dates(payload)  # Sets dates to tomorrow -> one month later
```

Parameters:
- `payload` (dict): Campaign payload dictionary

Returns:
- `dict`: Payload with updated date fields:
  - `startDate`: Tomorrow at midnight (Unix timestamp, ms)
  - `endDate`: One month after tomorrow (Unix timestamp, ms)
  - Cycle dates in `deliveryRules`
  - ISO dates in `additionalDetails.cycleData`

Helper functions:
- `get_tomorrow_timestamp()`: Returns tomorrow at midnight as a Unix timestamp (ms)
- `get_one_month_later_timestamp()`: Returns one month after tomorrow as a Unix timestamp (ms)
- `get_tomorrow_iso()`: Returns tomorrow in ISO format (`YYYY-MM-DDTHH:MM:SS.000Z`)
- `get_one_month_later_iso()`: Returns one month after tomorrow in ISO format
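The helpers can be sketched as below. This sketch works in UTC and treats "one month" as 30 days; both are assumptions, and `utils/data_loader.py` may use local time or calendar-month arithmetic instead:

```python
# Hypothetical sketch of the date helpers (UTC; "one month" approximated as 30 days).
from datetime import datetime, time, timedelta, timezone

def _tomorrow_midnight():
    tomorrow = datetime.now(timezone.utc).date() + timedelta(days=1)
    return datetime.combine(tomorrow, time.min, tzinfo=timezone.utc)

def get_tomorrow_timestamp():
    """Tomorrow at midnight as a Unix timestamp in milliseconds."""
    return int(_tomorrow_midnight().timestamp() * 1000)

def get_one_month_later_timestamp():
    """One month (here: 30 days) after tomorrow, in milliseconds."""
    return int((_tomorrow_midnight() + timedelta(days=30)).timestamp() * 1000)

def get_tomorrow_iso():
    """Tomorrow at midnight in ISO format (YYYY-MM-DDTHH:MM:SS.000Z)."""
    return _tomorrow_midnight().strftime("%Y-%m-%dT%H:%M:%S.000Z")
```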
**Function:** `get_request_info(token)`

Creates a standardized RequestInfo object.

```python
from utils.request_info import get_request_info

request_info = get_request_info(token)
payload["RequestInfo"] = request_info
```

Parameters:
- `token` (str): Authentication token

Returns:
- `dict`: RequestInfo object with API metadata, user context, and authentication
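The shape of such an envelope can be sketched as follows. The metadata values here (`apiId`, `ver`, `msgId`) are placeholders; the real values live in `utils/request_info.py`:

```python
# Hypothetical sketch of get_request_info; metadata values are placeholders.
import time

def get_request_info(token):
    return {
        "apiId": "test-automation",     # placeholder API identifier
        "ver": "1.0",                   # placeholder API version
        "ts": int(time.time() * 1000),  # request timestamp in milliseconds
        "msgId": "automation-test",     # placeholder correlation id
        "authToken": token,             # the OAuth2 token from get_auth_token
    }
```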
- Each test should be independent and not rely on execution order
- Use output files for sharing data between tests that must run sequentially
- Clean up test data when possible
- Always include response text in assertion messages for debugging
- Use try-except blocks for critical operations
- Log errors to output files
- Keep payloads as templates with minimal hardcoded values
- Inject dynamic data (UUIDs, IDs) at runtime
- Reuse payloads across similar tests
- Extract common operations into helper functions
- Use utility modules for shared functionality
- Follow DRY (Don't Repeat Yourself) principle
- Add docstrings to test functions and helpers
- Comment complex logic
- Keep README updated with new services/features
- Commit frequently with meaningful messages
- Use feature branches for new services
- Keep the `.env` file out of version control (add it to `.gitignore`)
1. **Authentication Failure**
   - Verify `.env` credentials are correct
   - Check that `CLIENT_AUTH_HEADER` is properly base64 encoded
   - Ensure the token hasn't expired

2. **Import Errors**
   - Verify the virtual environment is activated
   - Check that `pytest.ini` has `pythonpath = .`
   - Install all required dependencies

3. **Test Failures**
   - Check API endpoint availability
   - Verify the payload structure matches API requirements
   - Check `data/outputs/campaign_ids.json` for created campaign details

4. **Date-Related Failures**
   - Campaign dates must be in the future
   - Use `apply_dynamic_dates()` to auto-set valid dates
Last Updated: 2025-12-20