Comprehensive guide to the bash-logger test suite, including how to run tests, write new tests, and understand the test framework.
- Overview
- Running Tests
- Code Coverage and Static Analysis
- Test Suite Structure
- Writing Tests
- Adding New Tests
- Debugging Failed Tests
- Continuous Integration
- Related Documentation
The bash-logger project includes a comprehensive Bash-based test suite. To see the current
number of test suites and individual tests, run `bash tests/run_tests.sh` and check the
summary output. The test framework is built in pure Bash and designed to be:
- Self-contained: No external test frameworks required
- CI-friendly: Clear exit codes and non-interactive
- Developer-friendly: Colored output and detailed error messages
- Maintainable: Simple structure that's easy to extend
From the project root:

```shell
cd tests
./run_tests.sh
```

Or from anywhere in the project:

```shell
./tests/run_tests.sh
```

The test runner automatically detects available CPU cores and runs tests in parallel (capped at 8 jobs). You can override this with the `-j` or `--parallel` option:
```shell
# Auto-detected parallelism (default behavior)
./run_tests.sh

# Run with 4 parallel jobs (explicit)
./run_tests.sh -j 4

# Run with 8 parallel jobs (explicit)
./run_tests.sh -j 8

# Combine with other options
./run_tests.sh -j 8 --junit
```

Environment Variable Override:
Set `TEST_PARALLEL_JOBS` to control parallelism without command-line flags:

```shell
# Useful for CI/CD pipelines
export TEST_PARALLEL_JOBS=4
./run_tests.sh

# Or inline
TEST_PARALLEL_JOBS=4 ./run_tests.sh
```

Performance Impact:
- Sequential (`-j 1`): ~5+ minutes for full suite
- 4 parallel jobs: ~2-3 minutes
- 8 parallel jobs: ~1-2 minutes
Recommendations:
- Local development: Auto-detection works well (runs up to 8 parallel jobs)
- CI/CD: Set `TEST_PARALLEL_JOBS=4` for GitHub Actions or similar (2-4 core runners)
- Pre-commit hooks: Configured to use `-j 8` explicitly for faster commits
- Systems with many cores (16+): The 8-job cap avoids I/O contention
The `make test` target and pre-commit hooks take advantage of the auto-detected parallelism for optimal local performance.
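Putting the options above together, the effective job count follows a simple precedence: explicit `-j`/`--parallel` flag, then `TEST_PARALLEL_JOBS`, then auto-detection capped at 8. A minimal sketch of that resolution logic (the `resolve_parallel_jobs` helper name and the `nproc` fallback are illustrative, not the runner's actual code):

```shell
#!/usr/bin/env bash

# resolve_parallel_jobs CLI_VALUE
# Picks the job count: explicit flag > TEST_PARALLEL_JOBS > detected cores (capped at 8).
resolve_parallel_jobs() {
    local cli_jobs="$1" jobs
    if [[ -n "$cli_jobs" ]]; then
        jobs="$cli_jobs"                        # -j / --parallel wins
    elif [[ -n "${TEST_PARALLEL_JOBS:-}" ]]; then
        jobs="$TEST_PARALLEL_JOBS"              # env override, handy for CI
    else
        jobs="$(nproc 2>/dev/null || echo 1)"   # auto-detect, fall back to 1
        (( jobs > 8 )) && jobs=8                # cap to avoid I/O contention
    fi
    printf '%s\n' "$jobs"
}

resolve_parallel_jobs 4                        # prints 4 (flag wins)
TEST_PARALLEL_JOBS=2 resolve_parallel_jobs ""  # prints 2 (env override)
```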
Run one or more specific test suites:
```shell
cd tests
./run_tests.sh test_log_levels
./run_tests.sh test_initialization test_format
```

Available test suites:

- `test_ansi_injection` - ANSI escape sanitization and related security tests
- `test_concurrent_access` - Concurrency and parallel logging behavior
- `test_config` - Configuration file parsing and behavior
- `test_config_security` - Security hardening for configuration input
- `test_edge_cases` - Boundary and unusual input handling
- `test_environment_security` - Environment-based security checks
- `test_error_conditions` - Error handling and defensive behavior
- `test_format` - Message format templates and formatting behavior
- `test_fuzzing` - Fuzz-style robustness checks
- `test_initialization` - Logger initialization behavior
- `test_install` - Installation and setup scripts
- `test_journal_logging` - Journal-specific behavior and forced journal logging
- `test_junit_output` - JUnit XML report generation
- `test_log_levels` - Log level functionality
- `test_mixed_sanitization_modes` - Combined sanitization mode behavior
- `test_output` - Output routing and stream behavior
- `test_path_traversal` - Path traversal protections
- `test_resource_limits` - Resource and size limit behavior
- `test_runtime_config` - Runtime configuration changes
- `test_script_name_sanitization` - Script and tag name sanitization
- `test_sensitive_data` - Sensitive data handling protections
- `test_toctou_protection` - TOCTOU/race-condition protections
- `test_unsafe_newlines` - Unsafe newline mode behavior
The test runner provides colored, hierarchical output:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Bash Logger Test Suite
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Running test_log_levels...
  ✓ Log level constants are defined
  ✓ FATAL is alias for EMERGENCY
  ✓ get_log_level_value converts names to numbers
  ...
  ✓ test_log_levels: 12 passed

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Test Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Tests:  466
Passed:       461
Failed:       0
Skipped:      5

All tests passed!
```
Symbols:
- ✓ (green) - Test passed
- ✗ (red) - Test failed
- ⊘ (yellow) - Test skipped
Exit Codes:
- `0` - All tests passed
- `1` - One or more tests failed
The project supports code coverage reporting and static analysis via SonarQube. These tools are available through Makefile targets but require additional setup as they depend on external tools and services.
Note: These features are primarily used by the maintainer for local analysis against a private SonarQube instance. They are documented here for completeness and for contributors who wish to run similar analysis.
The coverage and SonarQube targets require the following tools:
| Tool | Purpose | Installation |
|---|---|---|
| kcov | Code coverage for Bash scripts | sudo dnf install kcov (Fedora) or sudo apt install kcov (Debian/Ubuntu) |
| sonar-scanner | SonarQube CLI scanner | Download from SonarQube docs or use package manager |
| secret-tool | GNOME Keyring CLI for secure token storage | sudo dnf install libsecret (Fedora) or sudo apt install libsecret-tools (Debian/Ubuntu) |
Additionally, you need:
- Access to a SonarQube server - The project's `sonar-project.properties` is configured for the maintainer's private instance. You'll need to modify this file for your own server.
- SonarQube authentication token - Stored securely in the GNOME Keyring (see below).
kcov provides code coverage for Bash scripts by instrumenting the script execution and tracking which lines are executed.
```shell
# Run tests with coverage
make coverage
```

This executes:

```shell
kcov --include-path=./logging.sh coverage-report ./tests/run_tests.sh
```

Output:

- Coverage report is generated in `coverage-report/`
- HTML report: `coverage-report/run_tests.sh/index.html`
- SonarQube-compatible XML: `coverage-report/run_tests.sh/sonarqube.xml`
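Since the XML report follows SonarQube's generic coverage format (per-file `<lineToCover lineNumber="…" covered="true|false"/>` entries), a rough line-coverage tally can be pulled out with `grep`. The snippet below runs against an inline sample for illustration; point `$xml` at `coverage-report/run_tests.sh/sonarqube.xml` for a real report:

```shell
#!/usr/bin/env bash

# Count covered vs. instrumented lines in a SonarQube generic coverage XML.
# Inline sample shown here; use the real report path for actual numbers.
xml="$(mktemp)"
cat > "$xml" <<'EOF'
<coverage version="1">
  <file path="logging.sh">
    <lineToCover lineNumber="10" covered="true"/>
    <lineToCover lineNumber="11" covered="true"/>
    <lineToCover lineNumber="12" covered="false"/>
  </file>
</coverage>
EOF

covered=$(grep -c 'covered="true"' "$xml")   # lines hit at least once
total=$(grep -c 'covered='        "$xml")    # all instrumented lines
echo "covered $covered of $total instrumented lines"
rm -f "$xml"
```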
The test runner can generate JUnit XML reports for CI systems and SonarQube:
```shell
# Run tests with JUnit XML output
make test-junit
```

Or directly:

```shell
./tests/run_tests.sh --junit
```

Output:

- JUnit XML report: `test-reports/junit.xml`
- The report follows the SonarQube Generic Test Execution format
SonarQube provides static analysis, code quality metrics, and aggregates coverage data.
The project uses secret-tool (GNOME Keyring) to securely store the SonarQube authentication
token rather than embedding it in files or environment variables.
Store your token:
```shell
secret-tool store --label='SonarQube Token' service sonarqube account scanner
```

You'll be prompted to enter your token. This stores it securely in the GNOME Keyring.

Verify the token is stored:

```shell
secret-tool lookup service sonarqube account scanner
```

The project includes a `sonar-project.properties` file configured for the maintainer's
private SonarQube instance. To use your own server, update:
```
# Update the host URL (add this line if using a different server)
sonar.host.url=https://your-sonarqube-server.example.com

# Optionally update project key if needed
sonar.projectKey=bash-logger
```

```shell
# Run SonarQube scanner
make sonar
```

Before running the scanner, this target automatically syncs the version number from
`logging.sh` (`BASH_LOGGER_VERSION`) to `sonar-project.properties` (`sonar.projectVersion`),
ensuring the version reported to SonarQube always matches the source code.
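A version-sync step like the one described could be implemented with `sed`. This is an illustrative sketch, not the Makefile's actual recipe; it assumes the version is declared as `BASH_LOGGER_VERSION="x.y.z"` in `logging.sh` and that a `sonar.projectVersion=` line already exists in the properties file:

```shell
#!/usr/bin/env bash

# sync_sonar_version SOURCE_FILE PROPERTIES_FILE
# Copies BASH_LOGGER_VERSION from the source into sonar.projectVersion.
sync_sonar_version() {
    local src="$1" props="$2" version
    # Extract the quoted value of BASH_LOGGER_VERSION (first match wins).
    version=$(sed -n 's/.*BASH_LOGGER_VERSION="\([^"]*\)".*/\1/p' "$src" | head -n1)
    [[ -n "$version" ]] || { echo "version not found in $src" >&2; return 1; }
    # Rewrite the sonar.projectVersion line in place (GNU sed -i).
    sed -i "s/^sonar\.projectVersion=.*/sonar.projectVersion=${version}/" "$props"
}

# Usage: sync_sonar_version logging.sh sonar-project.properties
```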
The scanner reads configuration from sonar-project.properties and uploads:
- Source code for analysis
- Coverage report from `coverage-report/run_tests.sh/sonarqube.xml`
- Test execution report from `test-reports/junit.xml`
To run coverage, tests with JUnit output, and SonarQube scan in sequence:
```shell
make sonar-analysis
```

This runs the following targets in order:

1. `coverage` - Generates coverage report via kcov
2. `test-junit` - Generates JUnit XML test report
3. `sonar` - Uploads everything to SonarQube
Cleaning up reports:
```shell
make clean
```

This removes the `coverage-report/` and `test-reports/` directories along with other temporary files.
The test suite is organized into the following files:
Core Infrastructure (tests/):
- `run_tests.sh` - Main test runner
- `test_helpers.sh` - Assertion functions and utilities
Test Suites (tests/):
`run_tests.sh` discovers test suites automatically by matching `test_*.sh`
(excluding `test_helpers.sh` and `test_example.sh`).

Current runnable suite files include:

- test_ansi_injection.sh
- test_concurrent_access.sh
- test_config.sh
- test_config_security.sh
- test_edge_cases.sh
- test_environment_security.sh
- test_error_conditions.sh
- test_format.sh
- test_fuzzing.sh
- test_initialization.sh
- test_install.sh
- test_journal_logging.sh
- test_junit_output.sh
- test_log_levels.sh
- test_mixed_sanitization_modes.sh
- test_output.sh
- test_path_traversal.sh
- test_resource_limits.sh
- test_runtime_config.sh
- test_script_name_sanitization.sh
- test_sensitive_data.sh
- test_toctou_protection.sh
- test_unsafe_newlines.sh
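Auto-discovery of this kind can be sketched with a simple glob loop. The `discover_suites` function below is illustrative, not the runner's actual implementation:

```shell
#!/usr/bin/env bash

# discover_suites [DIR]
# Lists suite names: test_*.sh in DIR, minus the documented exclusions.
discover_suites() {
    local dir="${1:-tests}" f name
    for f in "$dir"/test_*.sh; do
        [[ -e "$f" ]] || continue              # no matches: glob stays literal
        name="$(basename "$f" .sh)"
        case "$name" in
            test_helpers|test_example) continue ;;  # not runnable suites
        esac
        printf '%s\n' "$name"
    done
}

# Usage: discover_suites tests
```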
Current test coverage includes:
| Area | Coverage | Notes |
|---|---|---|
| Core logging behavior | ✅ Extensive | Levels, output streams, formatting, initialization |
| Configuration handling | ✅ Extensive | Parsing, validation, and security-focused config tests |
| Journal logging | ✅ Extensive | Initialization, runtime toggles, and forced per-call journal writes |
| Security hardening | ✅ Extensive | Sensitive data, path traversal, ANSI/newline handling, and TOCTOU |
| Reliability and robustness | ✅ Extensive | Concurrency, fuzzing, edge cases, and resource limits |
| CI/reporting integrations | ✅ Covered | JUnit XML output and SonarQube-related flow |
Use `./tests/run_tests.sh` to get the authoritative, current totals on your machine.
Every test follows this pattern:
```shell
test_feature_name() {
    start_test "Human-readable test description"

    # Setup
    init_logger --options
    local log_file="$TEST_DIR/test.log"
    LOG_FILE="$log_file"

    # Execute
    log_info "Test message"

    # Assert
    assert_file_contains "$log_file" "Test message" || return

    # Mark as passed
    pass_test
}
```

Key Points:

- Function name should start with `test_`
- Always call `start_test` with a descriptive message
- Use `$TEST_DIR` for temporary files
- Return early on assertion failure with `|| return`
- Call `pass_test` at the end
All assertion functions are defined in test_helpers.sh.
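To make the `|| return` idiom concrete, here is the general shape such a helper can take. This is an illustrative sketch, not the project's actual `test_helpers.sh` code (the real helpers also integrate with the runner's pass/fail counters):

```shell
#!/usr/bin/env bash

# assert_equals EXPECTED ACTUAL [MESSAGE]
# Returns 0 on match; prints a diagnostic and returns 1 on mismatch,
# so `assert_equals ... || return` aborts the calling test function early.
assert_equals() {
    local expected="$1" actual="$2" msg="${3:-}"
    if [[ "$expected" == "$actual" ]]; then
        return 0
    fi
    echo "Assertion failed${msg:+: $msg}" >&2
    echo "  Expected: '$expected'" >&2
    echo "  Actual:   '$actual'" >&2
    return 1
}
```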
```shell
# Assert two values are equal
assert_equals "expected" "actual" "optional message"

# Assert two values are different
assert_not_equals "unexpected" "actual" "optional message"
```

Example:

```shell
assert_equals "$LOG_LEVEL_INFO" "$CURRENT_LOG_LEVEL" || return
```

```shell
# Assert string contains substring
assert_contains "haystack" "needle" "optional message"

# Assert string doesn't contain substring
assert_not_contains "haystack" "needle" "optional message"

# Assert string matches regex pattern
assert_matches "string" "pattern" "optional message"
```

Example:

```shell
local output="[INFO] Test message"
assert_contains "$output" "[INFO]" || return
assert_matches "$output" "\[INFO\].*Test" || return
```

```shell
# File existence
assert_file_exists "path/to/file" "optional message"
assert_file_not_exists "path/to/file" "optional message"

# File content
assert_file_contains "path/to/file" "text" "optional message"
assert_file_not_contains "path/to/file" "text" "optional message"

# File size
assert_file_empty "path/to/file" "optional message"
assert_file_not_empty "path/to/file" "optional message"
```

Example:

```shell
local log_file="$TEST_DIR/test.log"
log_info "Test"
assert_file_exists "$log_file" || return
assert_file_contains "$log_file" "Test" || return
```

```shell
# Assert command succeeds (exit code 0)
assert_success command arg1 arg2

# Assert command fails (non-zero exit code)
assert_failure command arg1 arg2
```

Example:

```shell
assert_success check_logger_available
```

Additional helper functions available:

```shell
# Capture combined stdout/stderr
capture_output OUTPUT_VAR command args

# Capture streams separately
capture_streams STDOUT_VAR STDERR_VAR command args

# Run command with logger sourced
run_with_logger "init_logger && log_info 'test'"

# Skip a test with reason
skip_test "logger command not available"
```

Example:

```shell
if ! check_logger_available; then
    skip_test "logger command not available"
    return
fi
```
- Descriptive Names: Use clear, specific test names

  ```shell
  # Good
  test_error_messages_go_to_stderr()

  # Bad
  test_stderr()
  ```

- Isolated Tests: Each test should be independent

  ```shell
  # Each test gets fresh logger state via setup_test
  test_feature_one() {
      start_test "..."
      init_logger --level INFO
      # Test specific to INFO level
  }

  test_feature_two() {
      start_test "..."
      init_logger --level DEBUG
      # Independent - doesn't affect other tests
  }
  ```

- Use Temporary Files: Always use `$TEST_DIR` for test files

  ```shell
  local log_file="$TEST_DIR/my_test.log"
  LOG_FILE="$log_file"
  ```

- Clear Assertions: Add descriptive messages for complex assertions

  ```shell
  assert_equals "$expected" "$actual" "Level should be INFO after init" || return
  ```

- Test Edge Cases: Include boundary conditions and error cases

  ```shell
  test_empty_message()
  test_very_long_message()
  test_invalid_log_level()
  ```
To add a completely new test suite:
1. Create the test file: `tests/test_feature.sh`

   ```shell
   #!/usr/bin/env bash
   #
   # test_feature.sh - Tests for new feature
   #
   # Tests:
   #   - Brief description of what's tested

   # Individual test functions
   test_feature_works() {
       start_test "Feature works as expected"

       init_logger --quiet
       local log_file="$TEST_DIR/test.log"
       LOG_FILE="$log_file"

       # Your test logic here
       log_info "Test"
       assert_file_contains "$log_file" "Test" || return

       pass_test
   }

   test_feature_edge_case() {
       start_test "Feature handles edge case"
       # Your test logic
       pass_test
   }

   # Call all test functions
   test_feature_works
   test_feature_edge_case
   ```

2. Make it executable:

   ```shell
   chmod +x tests/test_feature.sh
   ```

3. No test runner edit required: `tests/run_tests.sh` auto-discovers `test_*.sh` files in `tests/` (excluding `test_helpers.sh` and `test_example.sh`).

4. Run your new tests:

   ```shell
   cd tests
   ./run_tests.sh test_feature
   ```

To add tests to an existing suite:
- Add test function to the appropriate file
- Call the function at the bottom of the file
- Run the suite to verify
Example - adding to test_log_levels.sh:
```shell
# Add this function
test_custom_log_level() {
    start_test "Custom log level works"
    # Your test implementation
    pass_test
}

# Add this call at the bottom
test_log_level_constants
test_fatal_alias
# ... existing calls ...
test_custom_log_level  # Add your new test
```

When a test fails:
1. Review the failure message - includes test name and reason:

   ```
   ✗ test_feature
     Reason: Expected 'value1' but got 'value2'
   ```

2. Check test artifacts - failed tests preserve temporary directories:

   ```
   Test artifacts in: /tmp/bash-logger-tests.XXXXXX/timestamp
   ```

3. Run the specific suite for faster iteration:

   ```shell
   ./run_tests.sh test_specific_suite
   ```

4. Add debug output temporarily:

   ```shell
   test_feature() {
       start_test "..."

       # Add debug output
       echo "DEBUG: variable=$variable" >&2
       echo "DEBUG: log_file contents:" >&2
       cat "$log_file" >&2

       assert_equals "expected" "$variable" || return
       pass_test
   }
   ```

5. Check the logging module - source it interactively:

   ```shell
   source logging.sh
   init_logger --level DEBUG
   log_info "Test"
   ```
The test suite is designed to work seamlessly in CI environments:
GitHub Actions Example:
```yaml
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run tests
        run: |
          cd tests
          ./run_tests.sh
```

Requirements:
- Bash 4.0 or later
- Standard Unix utilities (cat, grep, wc, date, mkdir, touch)
- Optional: `logger` command for journal logging tests
- Optional: `bc` for accurate test timing in JUnit XML output (durations show as 0 if unavailable)
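The Bash 4.0 requirement can be checked up front in a CI wrapper. The `require_bash4` helper below is an illustrative sketch using Bash's built-in `BASH_VERSINFO` array, not part of the test suite itself:

```shell
#!/usr/bin/env bash

# require_bash4 [MAJOR]
# Fails (returns 1) when the major Bash version is below 4.
# MAJOR defaults to the running shell's version; a parameter is
# accepted so the guard itself is easy to test.
require_bash4() {
    local major="${1:-${BASH_VERSINFO[0]}}"
    if (( major < 4 )); then
        echo "bash-logger tests require Bash 4.0+, found ${BASH_VERSION}" >&2
        return 1
    fi
}

require_bash4 || exit 1
```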
CI Characteristics:
- Non-interactive
- Clean exit codes (0=pass, 1=fail)
- Temporary files in system temp directory
- Skips tests when dependencies unavailable
- No pager usage or interactive prompts
- Writing Tests - Contributor guide: how to write and structure tests, with annotated examples and common pitfalls
- Getting Started - Basic usage of the logging module
- Initialization - Initialization options and configuration
- Examples - Comprehensive usage examples
- Troubleshooting - Common issues and solutions