This document outlines the testing standards and practices for all Bayat projects. Following these guidelines ensures consistent quality, reduces defects, and improves maintainability across all applications.
## Table of Contents

- Testing Philosophy
- Test Coverage Requirements
- Types of Testing
- Testing Tools
- Test Naming and Organization
- Test-Driven Development (TDD)
- Mocking and Test Doubles
- Testing in CI/CD Pipeline
- Testing Legacy Code
- Accessibility Testing
## Testing Philosophy

At Bayat, we believe that testing is not just a phase but an integral part of the development process. Our testing philosophy is based on the following principles:
- Shift Left: Testing should begin as early as possible in the development lifecycle.
- Quality is Everyone's Responsibility: All team members, not just QA, are responsible for quality.
- Automation First: Automate tests wherever possible to ensure consistency and enable continuous testing.
- Risk-Based Approach: Focus testing efforts based on risk assessment and business impact.
- Continuous Improvement: Regularly review and refine testing processes based on project outcomes.
## Test Coverage Requirements

Different project types and components have different coverage requirements:
| Project Type          | Minimum Unit Test Coverage | Integration Test Requirement | E2E Test Requirement |
|-----------------------|----------------------------|------------------------------|----------------------|
| Critical Systems      | 90%                        | Comprehensive                | Comprehensive        |
| Business Applications | 80%                        | Key flows                    | Critical paths       |
| Internal Tools        | 70%                        | Main functions               | Basic workflows      |
| Libraries/SDKs        | 90%                        | API boundaries               | Sample use cases     |
| UI Components         | 80%                        | Component integration        | Visual regression    |
| Games                 | 75%                        | Core mechanics               | Player journeys      |
Coverage should be measured with appropriate tools and reported as part of the CI/CD pipeline.
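In JavaScript projects using Jest, for example, these minimums can be enforced directly in the Jest configuration so that the build fails when coverage drops below the threshold. A minimal sketch, assuming the Business Applications tier (adjust the numbers to the project type):

```javascript
// jest.config.js — fail the test run when coverage falls below the minimum
module.exports = {
  collectCoverage: true,
  coverageReporters: ['text', 'lcov'], // lcov feeds CI coverage dashboards
  coverageThreshold: {
    global: {
      statements: 80, // Business Applications tier; raise to 90 for critical systems
      branches: 80,
      functions: 80,
      lines: 80,
    },
  },
};
```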
## Types of Testing

### Unit Testing

Unit tests verify individual components or functions in isolation.
Standards:
- Each public method should have at least one test
- Tests should be independent and not rely on external state
- Use mocks or stubs for dependencies
- Focus on behavior, not implementation details
- Test both happy paths and edge cases
Example (JavaScript/Jest):
```javascript
// Function to test
function calculateTotal(items, taxRate) {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  return subtotal + (subtotal * taxRate);
}

// Unit tests
describe('calculateTotal', () => {
  test('calculates total with tax correctly', () => {
    const items = [{ price: 10 }, { price: 20 }];
    const taxRate = 0.1; // 10% tax
    expect(calculateTotal(items, taxRate)).toBe(33); // 30 + 3 tax
  });

  test('returns zero for empty items', () => {
    expect(calculateTotal([], 0.1)).toBe(0);
  });

  test('handles zero tax rate', () => {
    const items = [{ price: 10 }, { price: 20 }];
    expect(calculateTotal(items, 0)).toBe(30);
  });
});
```
### Integration Testing

Integration tests verify that multiple components work together correctly.
Standards:
- Test interaction between components
- Focus on data flow and communication
- Include database or API integration
- Test error handling and recovery
- Use realistic test data
Example (C#/xUnit):
```csharp
public class OrderServiceIntegrationTests
{
    private readonly TestDbContext _dbContext;
    private readonly OrderService _orderService;
    private readonly PaymentService _paymentService;

    public OrderServiceIntegrationTests()
    {
        // Set up the integration test environment
        _dbContext = new TestDbContext();
        _paymentService = new PaymentService();
        _orderService = new OrderService(_dbContext, _paymentService);
    }

    [Fact]
    public async Task CreateOrder_WithValidData_ShouldCreateOrderAndProcessPayment()
    {
        // Arrange
        var orderRequest = new OrderRequest
        {
            CustomerId = Guid.NewGuid(),
            Items = new List<OrderItem>
            {
                new OrderItem { ProductId = "PROD-1", Quantity = 2 }
            }
        };

        // Act
        var result = await _orderService.CreateOrderAsync(orderRequest);

        // Assert
        Assert.NotNull(result);
        Assert.NotEqual(Guid.Empty, result.OrderId);

        // Verify the order was saved in the database
        var savedOrder = await _dbContext.Orders.FindAsync(result.OrderId);
        Assert.NotNull(savedOrder);
        Assert.Equal(orderRequest.CustomerId, savedOrder.CustomerId);

        // Verify the payment was processed
        Assert.True(result.PaymentProcessed);
    }
}
```
### End-to-End (E2E) Testing

E2E tests verify that the entire application works as expected from the user's perspective.
Standards:
- Test complete user flows and critical paths
- Use realistic environments that mirror production
- Include UI/UX elements when applicable
- Automate tests for regression testing
- Keep E2E test suites focused and maintainable
Example (Cypress):
```javascript
describe('User Authentication Flow', () => {
  it('should allow a user to sign up, log in, and access their profile', () => {
    // Sign up
    cy.visit('/signup');
    cy.get('[data-testid=signup-email]').type('[email protected]');
    cy.get('[data-testid=signup-password]').type('securePassword123');
    cy.get('[data-testid=signup-confirm]').type('securePassword123');
    cy.get('[data-testid=signup-button]').click();
    cy.url().should('include', '/dashboard');

    // Log out
    cy.get('[data-testid=logout-button]').click();
    cy.url().should('include', '/login');

    // Log in
    cy.get('[data-testid=login-email]').type('[email protected]');
    cy.get('[data-testid=login-password]').type('securePassword123');
    cy.get('[data-testid=login-button]').click();

    // Verify access to profile
    cy.get('[data-testid=nav-profile]').click();
    cy.url().should('include', '/profile');
    cy.get('[data-testid=profile-email]').should('contain', '[email protected]');
  });
});
```
### Performance Testing

Performance tests evaluate the responsiveness, stability, and scalability of applications.
Standards:
- Define clear performance benchmarks
- Test under various load conditions
- Measure response times, throughput, and resource utilization
- Identify bottlenecks and performance degradation
- Automate performance tests for regular execution
Key Performance Metrics:
- Response time (average, percentiles)
- Throughput (requests per second)
- Error rate
- CPU/memory utilization
- Database query performance
- Network latency
Tools:
- JMeter
- k6
- Gatling
- LoadRunner
- New Relic
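To illustrate, a minimal k6 load-test sketch (k6 scripts are written in JavaScript) that ramps up virtual users and encodes response-time and error-rate benchmarks as thresholds; the target URL and the specific numbers are placeholders:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  // Ramp up to 50 virtual users, hold the load, then ramp down
  stages: [
    { duration: '1m', target: 50 },
    { duration: '3m', target: 50 },
    { duration: '1m', target: 0 },
  ],
  // Encode the performance benchmarks; the run fails if they are not met
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95th percentile under 500 ms
    http_req_failed: ['rate<0.01'],   // error rate below 1%
  },
};

export default function () {
  const res = http.get('https://staging.example.com/api/orders'); // placeholder endpoint
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```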
### Security Testing

Security testing identifies vulnerabilities and ensures applications meet security requirements.
Standards:
- Integrate security scanning into the CI/CD pipeline (CI/CD Standards, DevSecOps Practices)
- Perform regular penetration testing
- Scan for common vulnerabilities (OWASP Top 10)
- Conduct code reviews with security focus (Code Review Standards)
- Test authentication and authorization mechanisms (Authentication Standards)
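Authorization checks in particular lend themselves to automated tests. A minimal sketch using Jest with the supertest library; the Express `app`, the `/admin/users` route, and the placeholder token are assumptions:

```javascript
const request = require('supertest');
const app = require('../src/app'); // hypothetical Express app under test

describe('authorization on protected routes', () => {
  test('rejects requests without a token', async () => {
    const res = await request(app).get('/admin/users'); // hypothetical admin route
    expect(res.status).toBe(401);
  });

  test('rejects authenticated users lacking the admin role', async () => {
    const res = await request(app)
      .get('/admin/users')
      .set('Authorization', 'Bearer token-for-regular-user'); // placeholder token from test setup
    expect(res.status).toBe(403);
  });
});
```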
## Testing Tools

Approved testing tools by category:
Unit Testing:
- JavaScript/TypeScript: Jest, Mocha
- Python: pytest, unittest
- C#: xUnit, NUnit, MSTest
- C++: Google Test, Catch2
- Java: JUnit, TestNG
Integration Testing:
- REST API: Postman, REST Assured
- Database: TestContainers
- GraphQL: Apollo Client Testing
End-to-End Testing:
- Web: Cypress, Playwright, Selenium
- Mobile: Appium, Detox
- API: Postman, REST Assured
Performance Testing:
- k6
- JMeter
- Gatling
Security Testing:
- OWASP ZAP
- SonarQube
- Snyk
- Checkmarx
## Test Naming and Organization

Tests should be organized and named to clearly indicate what they test:

Naming Convention:

- Test file: `[Component/Class]Tests.cs` or `[Component/Class].test.js`
- Test class: `[Component/Class]Tests`
- Test method: `[MethodUnderTest]_[Scenario]_[ExpectedResult]`
Example:
```csharp
// OrderServiceTests.cs
public class OrderServiceTests
{
    [Fact]
    public void CalculateTotal_WithDiscountCode_ShouldApplyDiscount()
    {
        // Test implementation
    }

    [Fact]
    public void CalculateTotal_WithInvalidDiscountCode_ShouldThrowException()
    {
        // Test implementation
    }
}
```
Organization:
- Group tests by feature or component
- Maintain parallel structure between code and tests
- Keep unit tests separate from integration tests
- Use test categories or tags for filtering
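For instance, a layout that mirrors the source tree and keeps test types separate might look like this (directory and file names are illustrative):

```text
src/
  orders/
    OrderService.cs
tests/
  unit/
    orders/
      OrderServiceTests.cs
  integration/
    orders/
      OrderServiceIntegrationTests.cs
```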
## Test-Driven Development (TDD)

TDD is encouraged for all new feature development, following the Red-Green-Refactor cycle:

1. **Red**: Write a failing test for the intended behavior.
2. **Green**: Write the minimal code to make the test pass.
3. **Refactor**: Improve the code while keeping tests passing.
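A condensed Jest illustration of one cycle, using a hypothetical `slugify` function:

```javascript
// Red: write a failing test first — slugify does not exist yet
test('slugify lowercases and hyphenates words', () => {
  expect(slugify('Hello World')).toBe('hello-world');
});

// Green: the minimal implementation that makes the test pass
function slugify(title) {
  return title.toLowerCase().split(' ').join('-');
}

// Refactor: improve the implementation while the test stays green,
// e.g. trim the input and collapse repeated whitespace:
// function slugify(title) {
//   return title.trim().toLowerCase().split(/\s+/).join('-');
// }
```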
Benefits of TDD:
- Ensures testable code design
- Provides clear acceptance criteria
- Reduces debugging time
- Creates documentation through tests
- Prevents regression
## Mocking and Test Doubles

Guidelines for effective use of test doubles:
- Use mocks for external dependencies (services, APIs)
- Use stubs for providing test data
- Use fakes for simulating complex behaviors
- Mock at the boundary of your system
- Avoid excessive mocking that couples tests to implementation
Example (Jest):
```javascript
// Service to test
class UserService {
  constructor(authClient, databaseClient) {
    this.authClient = authClient;
    this.databaseClient = databaseClient;
  }

  async getUserProfile(userId) {
    if (!(await this.authClient.isAuthenticated())) {
      throw new Error('Not authenticated');
    }
    return this.databaseClient.getUser(userId);
  }
}

// Test with mocks
test('getUserProfile returns user data for authenticated users', async () => {
  // Create mocks
  const mockAuthClient = {
    isAuthenticated: jest.fn().mockResolvedValue(true)
  };
  const mockUser = { id: '123', name: 'Test User' };
  const mockDbClient = {
    getUser: jest.fn().mockResolvedValue(mockUser)
  };

  // Create the service with mocks
  const userService = new UserService(mockAuthClient, mockDbClient);

  // Call the method
  const result = await userService.getUserProfile('123');

  // Verify interactions and result
  expect(mockAuthClient.isAuthenticated).toHaveBeenCalled();
  expect(mockDbClient.getUser).toHaveBeenCalledWith('123');
  expect(result).toEqual(mockUser);
});
```
## Testing in CI/CD Pipeline

Testing should be fully integrated into the CI/CD pipeline:

1. **Pull Request Stage**:
   - Run unit tests and linting for quick feedback
   - Run integration tests for affected components
   - Generate test coverage reports
2. **Main Branch Stage**:
   - Run all tests (unit, integration, E2E)
   - Run performance tests
   - Run security scans
3. **Deployment Stage**:
   - Run smoke tests post-deployment (see the sketch below)
   - Conduct canary testing
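For example, a post-deployment smoke test can be a tiny Jest suite pointed at the freshly deployed environment; the base URL variable and the `/health` endpoint are assumptions:

```javascript
// smoke.test.js — run against the deployed environment (Node 18+ for built-in fetch)
const BASE_URL = process.env.SMOKE_BASE_URL || 'https://staging.example.com';

test('health endpoint responds with 200', async () => {
  const res = await fetch(`${BASE_URL}/health`); // assumed health-check route
  expect(res.status).toBe(200);
});
```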
Test Failure Handling:
- Tests must pass before code can be merged
- Failed tests block deployment
- Test failures should be fixed with highest priority
## Testing Legacy Code

Approach for testing legacy code without existing tests:
- Identify critical paths and high-risk areas
- Add characterization tests to document current behavior
- Refactor incrementally to improve testability
- Add unit tests as code is modified
- Target 70% coverage for legacy systems over time
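A characterization test simply records what the code does today, often with a snapshot, so later refactoring can be checked against it. A sketch, where `legacyFormatInvoice` is a hypothetical legacy function:

```javascript
const { legacyFormatInvoice } = require('./legacy/invoice'); // hypothetical legacy module

test('documents the current invoice formatting behavior', () => {
  const invoice = { id: 42, items: [{ price: 10, quantity: 2 }] };
  // The snapshot captures today's output, whatever it is;
  // any behavioral change during refactoring will fail this test.
  expect(legacyFormatInvoice(invoice)).toMatchSnapshot();
});
```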
## Accessibility Testing

All user-facing applications must undergo accessibility testing:
- Test with screen readers (NVDA, JAWS, VoiceOver)
- Verify keyboard navigation
- Ensure proper contrast ratios
- Check ARIA attributes and semantic HTML
- Validate against WCAG 2.1 AA standards
Automated Tools:
- axe-core
- Lighthouse
- WAVE
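As an automated example, axe-core can run inside unit tests via the jest-axe helper; this sketch assumes a jsdom test environment:

```javascript
const { axe, toHaveNoViolations } = require('jest-axe');

expect.extend(toHaveNoViolations);

test('rendered form has no detectable accessibility violations', async () => {
  document.body.innerHTML = `
    <form>
      <label for="email">Email</label>
      <input id="email" type="email" />
      <button type="submit">Subscribe</button>
    </form>
  `;
  // axe scans the rendered DOM for WCAG rule violations
  expect(await axe(document.body)).toHaveNoViolations();
});
```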
Manual Testing:
- Screen reader usability testing
- Keyboard navigation verification
- Color contrast analysis
## Version History

| Version | Date       | Description     |
|---------|------------|-----------------|
| 1.0     | 2025-03-20 | Initial version |