Testing Standards

This document outlines the testing standards and practices for all Bayat projects. Following these guidelines ensures consistent quality, reduces defects, and improves maintainability across all applications.

Table of Contents

  • Testing Philosophy
  • Test Coverage Requirements
  • Types of Testing
  • Testing Tools
  • Test Naming and Organization
  • Test-Driven Development (TDD)
  • Mocking and Test Doubles
  • Testing in CI/CD Pipeline
  • Testing Legacy Code
  • Accessibility Testing
  • Version History

Testing Philosophy

At Bayat, we believe that testing is not just a phase but an integral part of the development process. Our testing philosophy is based on the following principles:

  1. Shift Left: Testing should begin as early as possible in the development lifecycle.
  2. Quality is Everyone's Responsibility: All team members, not just QA, are responsible for quality.
  3. Automation First: Automate tests wherever possible to ensure consistency and enable continuous testing.
  4. Risk-Based Approach: Focus testing efforts based on risk assessment and business impact.
  5. Continuous Improvement: Regularly review and refine testing processes based on project outcomes.

Test Coverage Requirements

Different project types and components have different coverage requirements:

| Project Type | Minimum Unit Test Coverage | Integration Test Requirement | E2E Test Requirement |
| --- | --- | --- | --- |
| Critical Systems | 90% | Comprehensive | Comprehensive |
| Business Applications | 80% | Key flows | Critical paths |
| Internal Tools | 70% | Main functions | Basic workflows |
| Libraries/SDKs | 90% | API boundaries | Sample use cases |
| UI Components | 80% | Component integration | Visual regression |
| Games | 75% | Core mechanics | Player journeys |

Coverage should be measured with appropriate tools and reported as part of the CI/CD pipeline.
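
Example (Jest configuration, illustrative): with Jest, one of the approved unit testing tools, a coverage threshold can be enforced directly in the project configuration so the pipeline fails when coverage drops below the required level. The 80% figure below is a placeholder and should match the project type in the table above:

// jest.config.js — illustrative threshold for a business application
module.exports = {
  collectCoverage: true,
  coverageReporters: ['text', 'lcov'],
  coverageThreshold: {
    global: {
      statements: 80,
      branches: 80,
      functions: 80,
      lines: 80,
    },
  },
};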

Types of Testing

Unit Testing

Unit tests verify individual components or functions in isolation.

Standards:

  • Each public method should have at least one test
  • Tests should be independent and not rely on external state
  • Use mocks or stubs for dependencies
  • Focus on behavior, not implementation details
  • Test both happy paths and edge cases

Example (JavaScript/Jest):

// Function to test
function calculateTotal(items, taxRate) {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  return subtotal + (subtotal * taxRate);
}

// Unit test
describe('calculateTotal', () => {
  test('calculates total with tax correctly', () => {
    const items = [{ price: 10 }, { price: 20 }];
    const taxRate = 0.1; // 10% tax
    expect(calculateTotal(items, taxRate)).toBe(33); // 30 + 3 tax
  });
  
  test('returns zero for empty items', () => {
    expect(calculateTotal([], 0.1)).toBe(0);
  });
  
  test('handles zero tax rate', () => {
    const items = [{ price: 10 }, { price: 20 }];
    expect(calculateTotal(items, 0)).toBe(30);
  });
});

Integration Testing

Integration tests verify that multiple components work together correctly.

Standards:

  • Test interaction between components
  • Focus on data flow and communication
  • Include database or API integration
  • Test error handling and recovery
  • Use realistic test data

Example (C#/xUnit):

public class OrderServiceIntegrationTests
{
    private readonly TestDbContext _dbContext;
    private readonly OrderService _orderService;
    private readonly PaymentService _paymentService;
    
    public OrderServiceIntegrationTests()
    {
        // Setup integration test environment
        _dbContext = new TestDbContext();
        _paymentService = new PaymentService();
        _orderService = new OrderService(_dbContext, _paymentService);
    }
    
    [Fact]
    public async Task CreateOrder_WithValidData_ShouldCreateOrderAndProcessPayment()
    {
        // Arrange
        var orderRequest = new OrderRequest
        {
            CustomerId = Guid.NewGuid(),
            Items = new List<OrderItem>
            {
                new OrderItem { ProductId = "PROD-1", Quantity = 2 }
            }
        };
        
        // Act
        var result = await _orderService.CreateOrderAsync(orderRequest);
        
        // Assert
        Assert.NotNull(result);
        Assert.NotEqual(Guid.Empty, result.OrderId);
        
        // Verify order was saved in database
        var savedOrder = await _dbContext.Orders.FindAsync(result.OrderId);
        Assert.NotNull(savedOrder);
        Assert.Equal(orderRequest.CustomerId, savedOrder.CustomerId);
        
        // Verify payment was processed
        Assert.True(result.PaymentProcessed);
    }
}

End-to-End Testing

E2E tests verify the entire application works as expected from the user's perspective.

Standards:

  • Test complete user flows and critical paths
  • Use realistic environments that mirror production
  • Include UI/UX elements when applicable
  • Automate tests for regression testing
  • Keep E2E test suites focused and maintainable

Example (Cypress):

describe('User Authentication Flow', () => {
  it('should allow a user to sign up, login, and access their profile', () => {
    // Sign up
    cy.visit('/signup');
    cy.get('[data-testid=signup-email]').type('testuser@example.com');
    cy.get('[data-testid=signup-password]').type('securePassword123');
    cy.get('[data-testid=signup-confirm]').type('securePassword123');
    cy.get('[data-testid=signup-button]').click();
    cy.url().should('include', '/dashboard');
    
    // Log out
    cy.get('[data-testid=logout-button]').click();
    cy.url().should('include', '/login');
    
    // Log in
    cy.get('[data-testid=login-email]').type('testuser@example.com');
    cy.get('[data-testid=login-password]').type('securePassword123');
    cy.get('[data-testid=login-button]').click();
    
    // Verify access to profile
    cy.get('[data-testid=nav-profile]').click();
    cy.url().should('include', '/profile');
    cy.get('[data-testid=profile-email]').should('contain', 'testuser@example.com');
  });
});

Performance Testing

Performance tests evaluate the responsiveness, stability, and scalability of applications.

Standards:

  • Define clear performance benchmarks
  • Test under various load conditions
  • Measure response times, throughput, and resource utilization
  • Identify bottlenecks and performance degradation
  • Automate performance tests for regular execution

Key Performance Metrics:

  • Response time (average, percentiles)
  • Throughput (requests per second)
  • Error rate
  • CPU/memory utilization
  • Database query performance
  • Network latency

Tools:

  • JMeter
  • k6
  • Gatling
  • LoadRunner
  • New Relic
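
Example (k6, illustrative): a minimal load-test sketch. The target URL, virtual user count, duration, and thresholds below are placeholders and should be replaced with the project's actual endpoints and performance benchmarks:

import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 50,            // 50 concurrent virtual users (placeholder)
  duration: '2m',     // sustained load for two minutes (placeholder)
  thresholds: {
    http_req_duration: ['p(95)<500'],  // 95th percentile response time under 500 ms
    http_req_failed: ['rate<0.01'],    // error rate below 1%
  },
};

export default function () {
  const res = http.get('https://staging.example.com/api/orders'); // placeholder endpoint
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
  sleep(1);
}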

Security Testing

Security testing identifies vulnerabilities and ensures applications meet security requirements.

Standards:

  • Include security testing in the definition of done for user-facing features
  • Run static analysis (SAST) and dependency scanning on every build
  • Run dynamic scans (DAST) against staging environments before release
  • Explicitly test authentication, authorization, and input validation paths
  • Validate against the OWASP Top 10 for web applications

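Example (Jest with supertest, illustrative): a sketch of an application-level security test verifying that unauthenticated and badly authenticated requests are rejected. The app import and route are placeholders for the project's actual Express application:

const request = require('supertest');
const app = require('../src/app'); // placeholder path to the Express app under test

describe('API authorization', () => {
  test('rejects unauthenticated requests to protected endpoints', async () => {
    const response = await request(app).get('/api/orders');
    expect(response.status).toBe(401);
  });

  test('rejects requests with an invalid token', async () => {
    const response = await request(app)
      .get('/api/orders')
      .set('Authorization', 'Bearer invalid-token');
    expect(response.status).toBe(401);
  });
});
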
Testing Tools

Approved testing tools by category:

Unit Testing:

  • JavaScript/TypeScript: Jest, Mocha
  • Python: pytest, unittest
  • C#: xUnit, NUnit, MSTest
  • C++: Google Test, Catch2
  • Java: JUnit, TestNG

Integration Testing:

  • REST API: Postman, REST Assured
  • Database: TestContainers
  • GraphQL: Apollo Client Testing

End-to-End Testing:

  • Web: Cypress, Playwright, Selenium
  • Mobile: Appium, Detox
  • API: Postman, REST Assured

Performance Testing:

  • k6
  • JMeter
  • Gatling

Security Testing:

  • OWASP ZAP
  • SonarQube
  • Snyk
  • Checkmarx

Test Naming and Organization

Tests should be organized and named to clearly indicate what they test:

Naming Convention:

  • Test file: [Component/Class]Tests.cs or [Component/Class].test.js
  • Test class: [Component/Class]Tests
  • Test method: [MethodUnderTest]_[Scenario]_[ExpectedResult]

Example:

// OrderServiceTests.cs
public class OrderServiceTests
{
    [Fact]
    public void CalculateTotal_WithDiscountCode_ShouldApplyDiscount()
    {
        // Test implementation
    }
    
    [Fact]
    public void CalculateTotal_WithInvalidDiscountCode_ShouldThrowException()
    {
        // Test implementation
    }
}

Organization:

  • Group tests by feature or component
  • Maintain parallel structure between code and tests
  • Keep unit tests separate from integration tests
  • Use test categories or tags for filtering
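
Example (illustrative directory layout): one way to keep the test tree parallel to the source tree while separating unit tests from integration tests; adjust names to the project's conventions:

src/
  services/
    order-service.js
tests/
  unit/
    services/
      order-service.test.js
  integration/
    services/
      order-service.integration.test.js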

Test-Driven Development (TDD)

TDD is encouraged for all new feature development following the Red-Green-Refactor cycle:

  1. Red: Write a failing test for the intended behavior
  2. Green: Write the minimal code to make the test pass
  3. Refactor: Improve the code while keeping tests passing
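
Example (JavaScript/Jest, illustrative): one pass through the cycle for a hypothetical applyDiscount function. The test is written first and fails (Red), a minimal implementation makes it pass (Green), and the code is then improved with the test still passing (Refactor):

// Red: the test is written before the implementation exists, so it fails
test('applyDiscount reduces the price by the discount percentage', () => {
  expect(applyDiscount(100, 0.2)).toBe(80);
});

// Green: the minimal implementation that makes the test pass
function applyDiscount(price, discountRate) {
  return price - price * discountRate;
}

// Refactor: improve naming, extract validation, etc., while the test stays green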

Benefits of TDD:

  • Ensures testable code design
  • Provides clear acceptance criteria
  • Reduces debugging time
  • Creates documentation through tests
  • Prevents regression

Mocking and Test Doubles

Guidelines for effective use of test doubles:

  • Use mocks for external dependencies (services, APIs)
  • Use stubs for providing test data
  • Use fakes for simulating complex behaviors
  • Mock at the boundary of your system
  • Avoid excessive mocking that couples tests to implementation

Example (Jest):

// Service to test
class UserService {
  constructor(authClient, databaseClient) {
    this.authClient = authClient;
    this.databaseClient = databaseClient;
  }
  
  async getUserProfile(userId) {
    if (!await this.authClient.isAuthenticated()) {
      throw new Error('Not authenticated');
    }
    return this.databaseClient.getUser(userId);
  }
}

// Test with mocks
test('getUserProfile returns user data for authenticated users', async () => {
  // Create mocks
  const mockAuthClient = {
    isAuthenticated: jest.fn().mockResolvedValue(true)
  };
  
  const mockUser = { id: '123', name: 'Test User' };
  const mockDbClient = {
    getUser: jest.fn().mockResolvedValue(mockUser)
  };
  
  // Create service with mocks
  const userService = new UserService(mockAuthClient, mockDbClient);
  
  // Call method
  const result = await userService.getUserProfile('123');
  
  // Verify interactions and result
  expect(mockAuthClient.isAuthenticated).toHaveBeenCalled();
  expect(mockDbClient.getUser).toHaveBeenCalledWith('123');
  expect(result).toEqual(mockUser);
});

Testing in CI/CD Pipeline

Testing should be fully integrated into the CI/CD pipeline:

  1. Pull Request Stage:

    • Run unit tests and linting for quick feedback
    • Run integration tests for affected components
    • Generate test coverage reports
  2. Main Branch Stage:

    • Run all tests (unit, integration, e2e)
    • Run performance tests
    • Run security scans
  3. Deployment Stage:

    • Run smoke tests post-deployment
    • Conduct canary testing

Test Failure Handling:

  • Tests must pass before code can be merged
  • Failed tests block deployment
  • Test failures should be fixed with highest priority

Testing Legacy Code

Approach for testing legacy code without existing tests:

  1. Identify critical paths and high-risk areas
  2. Add characterization tests to document current behavior
  3. Refactor incrementally to improve testability
  4. Add unit tests as code is modified
  5. Target 70% coverage for legacy systems over time
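
Example (Jest, illustrative): a characterization test that pins down the current output of a hypothetical legacy function using a snapshot, so later refactoring can be checked against today's behavior:

// legacy-pricing.characterization.test.js — hypothetical legacy module
const { calculateLegacyInvoice } = require('../src/legacy/pricing');

test('calculateLegacyInvoice keeps its current behavior for a known input', () => {
  const order = { items: [{ sku: 'A-1', qty: 3, unitPrice: 9.99 }], region: 'EU' };

  // The snapshot records whatever the legacy code returns today, quirks included,
  // and fails if that behavior changes during refactoring.
  expect(calculateLegacyInvoice(order)).toMatchSnapshot();
});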

Accessibility Testing

All user-facing applications must undergo accessibility testing:

  • Test with screen readers (NVDA, JAWS, VoiceOver)
  • Verify keyboard navigation
  • Ensure proper contrast ratios
  • Check ARIA attributes and semantic HTML
  • Validate against WCAG 2.1 AA standards

Automated Tools:

  • axe-core
  • Lighthouse
  • WAVE
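
Example (Jest with jest-axe, illustrative): a sketch of an automated axe-core check against a rendered fragment of markup, assuming a jsdom test environment. The form below is a placeholder; in a real suite the component under test would be rendered with the project's UI framework:

const { axe, toHaveNoViolations } = require('jest-axe');

expect.extend(toHaveNoViolations);

test('signup form markup has no detectable accessibility violations', async () => {
  document.body.innerHTML = `
    <main>
      <form>
        <label for="email">Email</label>
        <input id="email" type="email" />
        <button type="submit">Sign up</button>
      </form>
    </main>`;

  const results = await axe(document.body);
  expect(results).toHaveNoViolations();
});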

Manual Testing:

  • Screen reader usability testing
  • Keyboard navigation verification
  • Color contrast analysis

Version History

| Version | Date | Description |
| --- | --- | --- |
| 1.0 | 2025-03-20 | Initial version |