This document describes the automated testing infrastructure for all scripts and automations in the ioBroker Copilot Instructions repository.
The testing framework provides comprehensive automated testing for all shell scripts in the `scripts/` directory:

- `manage-versions.sh` - Master version management script
- `extract-version.sh` - Version and date extraction utility
- `update-versions.sh` - Documentation version synchronization
- `check-template-version.sh` - Template version comparison tool
All test files are located in the `tests/` directory:

- `test-runner.sh` - Main test execution framework
- `test-extract-version.sh` - Tests for `extract-version.sh` functionality
- `test-manage-versions.sh` - Tests for `manage-versions.sh` commands
- `test-update-versions.sh` - Tests for `update-versions.sh` operations
- `test-check-template-version.sh` - Tests for `check-template-version.sh` features
- `test-integration.sh` - Integration tests for script interactions
Unit Tests: Test individual script functions and command-line options
- Parameter validation
- Output format verification
- Error handling
- Edge cases
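As a concrete illustration, a parameter-validation check in this style runs a script with a bad flag and asserts on the exit code and error text. This is a minimal sketch, not the framework's actual code; the `--bogus` flag and the matched error keywords are assumptions:

```shell
#!/usr/bin/env bash
# Sketch: unit-style parameter validation. Runs a script with an
# unknown flag and passes only when it exits non-zero with a
# recognizable error message. The --bogus flag is illustrative.
check_invalid_param() {
  local script="$1"
  local output status=0
  output=$("$script" --bogus 2>&1) || status=$?
  if [ "$status" -ne 0 ] && printf '%s\n' "$output" | grep -qiE 'usage|invalid|unknown'; then
    echo "PASS: $script rejects invalid parameters"
  else
    echo "FAIL: $script accepted an invalid parameter"
  fi
}
```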
Integration Tests: Test script interactions and workflows
- End-to-end version update workflows
- Cross-script dependencies
- File system operations
- Consistency validation
Error Handling Tests: Test graceful failure scenarios
- Missing files
- Invalid parameters
- Network failures (for remote checks)
- Dependency failures
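Missing-file and invalid-parameter scenarios mostly reduce to asserting that a command fails. A small helper for that could look like this (a sketch, not the framework's code):

```shell
#!/usr/bin/env bash
# Sketch: assert that a command fails. Prints PASS and returns 0
# when the command exits non-zero, as expected in error-handling tests.
expect_failure() {
  if "$@" >/dev/null 2>&1; then
    echo "FAIL: command unexpectedly succeeded: $*"
    return 1
  else
    echo "PASS: command failed as expected: $*"
    return 0
  fi
}
```

Called, for instance, as `expect_failure cat /no/such/file` to exercise the missing-file case (the path is illustrative).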
```bash
# Run all tests
./tests/test-runner.sh

# Run a specific test file
./tests/test-runner.sh tests/test-extract-version.sh

# Make the test runner executable if needed
chmod +x tests/test-runner.sh
```

Tests run automatically via GitHub Actions on:
- Push to main/develop branches
- Pull requests to main branch
- Daily schedule (2 AM UTC)
- Manual trigger via workflow_dispatch
The workflow is defined in `.github/workflows/test-scripts.yml`.
Each test run creates an isolated temporary directory with copies of all repository files, ensuring tests don't interfere with the actual repository or each other.
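The isolation described above can be sketched as a pair of setup/teardown helpers. This is an illustration only; `REPO_ROOT`, the `cp` flags, and the function names are assumptions about how such a setup might look, not the framework's actual code:

```shell
#!/usr/bin/env bash
# Sketch: create an isolated copy of the repository for one test run.
# Copying "$src/." (note the trailing /.) also picks up hidden
# entries such as .github.
setup_test_env() {
  local src="${REPO_ROOT:-.}"
  TEST_DIR=$(mktemp -d)
  cp -a "$src/." "$TEST_DIR/"   # -a preserves permissions and modes
  cd "$TEST_DIR" || return 1
}

# Sketch: leave and delete the temporary directory after the run.
teardown_test_env() {
  cd / && rm -rf "$TEST_DIR"
}
```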
- Exit Code Validation: Tests verify expected success/failure states
- Output Pattern Matching: Tests check for specific output messages
- File State Verification: Tests validate file changes and consistency
- Dependency Checking: Tests verify script dependencies exist
- Color-coded pass/fail indicators
- Detailed error messages for failed tests
- Test summary with counts
- Failed test details for debugging
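Reporting along these lines can be sketched with ANSI colors and running counters (an illustration, not `test-runner.sh`'s actual implementation):

```shell
#!/usr/bin/env bash
# Sketch: color-coded pass/fail reporting with running counts.
PASS_COUNT=0
FAIL_COUNT=0
GREEN='\033[0;32m'; RED='\033[0;31m'; NC='\033[0m'

print_test_result() {
  local name="$1" result="$2" message="${3:-}"
  if [ "$result" = "PASS" ]; then
    PASS_COUNT=$((PASS_COUNT + 1))
    printf "${GREEN}PASS${NC} %s\n" "$name"
  else
    FAIL_COUNT=$((FAIL_COUNT + 1))
    printf "${RED}FAIL${NC} %s - %s\n" "$name" "$message"
  fi
}

print_summary() {
  printf 'Tests run: %d, passed: %d, failed: %d\n' \
    $((PASS_COUNT + FAIL_COUNT)) "$PASS_COUNT" "$FAIL_COUNT"
}
```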
When adding new functionality to existing scripts or creating new scripts:
- Add test cases to the appropriate `test-<script-name>.sh` file
- Follow the existing test patterns:

```bash
run_test_with_output \
  "Test description" \
  "command to test" \
  "expected output pattern"
```

- Create a new test file: `tests/test-<new-script>.sh`
- Follow the structure of existing test files
- Test all command-line options and error conditions
- Add integration tests to `test-integration.sh` if the script interacts with others
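A minimal helper consistent with that calling convention might be implemented like this (a sketch; the real `run_test_with_output` in `test-runner.sh` may differ):

```shell
#!/usr/bin/env bash
# Sketch: run a command and pass when its combined stdout/stderr
# matches an extended regex.
run_test_with_output() {
  local name="$1" cmd="$2" pattern="$3"
  local output
  output=$(eval "$cmd" 2>&1) || true
  if printf '%s\n' "$output" | grep -qE "$pattern"; then
    echo "PASS: $name"
  else
    echo "FAIL: $name (no match for: $pattern)"
  fi
}
```

It could then be called, for instance, as `run_test_with_output "prints a version" "./scripts/extract-version.sh" "[0-9]+\.[0-9]+"` (the expected pattern here is an assumption about the script's output).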
```bash
# Test with exit code validation only
run_test "test name" "command" [expected_exit_code]

# Test with output pattern validation
run_test_with_output "test name" "command" "regex_pattern" [expected_exit_code]

# Direct pass/fail reporting
print_test_result "test name" "PASS|FAIL" ["error message"]
```

The test suite includes specific validation for version management workflows:
- Template vs README version matching
- Date synchronization verification
- Cross-file version propagation
- Full version update cycle testing
- Rollback and restoration testing
- Error recovery testing
- Network failure handling
- Malformed response handling
- Update guidance verification
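Template-vs-README matching of the kind listed above boils down to extracting a version string from each file and comparing them. A sketch follows; the `Version: X.Y.Z` line format is an assumption about the documents' contents, not the actual convention used by these files:

```shell
#!/usr/bin/env bash
# Sketch: extract the first "Version: X.Y.Z" occurrence from two
# files and succeed only when both exist and are identical.
versions_match() {
  local a b
  a=$(grep -oE 'Version: [0-9]+\.[0-9]+\.[0-9]+' "$1" | head -n1)
  b=$(grep -oE 'Version: [0-9]+\.[0-9]+\.[0-9]+' "$2" | head -n1)
  [ -n "$a" ] && [ "$a" = "$b" ]
}
```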
- Test both success and failure scenarios
- Include edge cases and boundary conditions
- Verify error messages are helpful and actionable
- Test script behavior with missing dependencies
- Update tests when script functionality changes
- Ensure tests are deterministic and don't depend on external state
- Use meaningful test names that describe the scenario
- Group related tests logically
- Test graceful degradation when files are missing
- Verify appropriate exit codes for different error conditions
- Test recovery from inconsistent states
Test Environment Setup Failures: Ensure all scripts have execute permissions:

```bash
chmod +x scripts/*.sh tests/*.sh
```

Hidden File Copying: The test framework copies hidden directories (like `.github`) for complete testing.
Network-dependent Tests: Some tests (e.g., remote version checking) may fail in network-restricted environments; they include fallback scenarios for this case.
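That fallback behavior can be illustrated as follows (a sketch assuming `curl` is available; the function name and the SKIP convention are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: attempt a remote fetch with a short timeout; report SKIP
# instead of FAIL when the network is unavailable.
remote_check_or_skip() {
  local url="$1"
  if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
    echo "PASS: remote version source reachable"
  else
    echo "SKIP: network unavailable, falling back to local checks"
  fi
}
```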
- Run specific failing test file for detailed output
- Check the error message in the test summary
- Verify script dependencies and file permissions
- Test the actual script manually with the failing scenario
Potential testing improvements:
- Performance Testing: Add timing measurements for large operations
- Stress Testing: Test with large files or many concurrent operations
- Security Testing: Validate input sanitization and path traversal protection
- Cross-Platform Testing: Test on different OS environments (Windows, macOS)
- Mutation Testing: Verify test quality by introducing intentional bugs