Unified logging system for consistent, informative output across Python and Bash.

Quick Reference: Error Handling | Testing | Troubleshooting
| Topic | Guide | Description |
|---|---|---|
| 🐍 Python | python-logging.md | Python logging system |
| 🐚 Bash | bash-logging.md | Shell script logging |
| 📋 Patterns | logging-patterns.md | Best practices, progress tracking |
- **Infrastructure Logging** (`infrastructure.core.logging.logging_utils`) - Logging utilities
  - Environment-based configuration
  - Context managers and decorators
  - Progress tracking and resource monitoring
- **Project Logging** (`projects/*/src/utils/logging.py`) - Standardized interface for projects
  - Simple, consistent API
  - Graceful fallback when infrastructure unavailable
  - Seamless integration with infrastructure logging
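The decorators mentioned above are not shown elsewhere in this guide; one plausible shape (an illustrative sketch only — the actual decorator API in `logging_utils` may differ) wraps a function in start/complete/fail logging:

```python
import functools
import logging

def logged_operation(name: str):
    """Decorator that logs entry, completion, and failure of a function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            log = logging.getLogger(func.__module__)
            log.info("Starting: %s", name)
            try:
                result = func(*args, **kwargs)
                log.info("Completed: %s", name)
                return result
            except Exception:
                log.exception("Failed: %s", name)
                raise
        return wrapper
    return decorator

@logged_operation("Demo analysis")
def run_analysis(x: int) -> int:
    return x * 2
```

`logged_operation` and `run_analysis` are hypothetical names for illustration; the same idea can be layered on top of the `log.operation(...)` context manager shown below.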
```python
# Import the standardized logger
from utils.logging import get_logger

# Get a logger for your module
log = get_logger(__name__)

# Basic logging
log.info("Starting analysis")
log.success("Analysis completed!")
log.error("Something went wrong")

# Progress tracking
log.progress(50, 100, "Processing data")
log.stage(2, 5, "Data Analysis")

# Context managers
with log.operation("Running simulation"):
    # Your simulation code
    pass
```

Use the full infrastructure logging:
```python
from infrastructure.core.logging.logging_utils import get_logger, log_operation

log = get_logger(__name__)

# Advanced logging features
with log_operation("Complex operation", log):
    # Operation code
    pass

# Resource monitoring
log.resource_usage("After data processing")
```

```bash
source scripts/bash_utils.sh

log_success "Operation completed"
log_info "General information"
log_warning "Warning message"
log_error "Error occurred"
```

| Variable | Values | Default | Description |
|---|---|---|---|
| `LOG_LEVEL` | `0`,`1`,`2`,`3` | `1` | 0=DEBUG, 1=INFO, 2=WARNING, 3=ERROR |
| `NO_EMOJI` | `true`/`false` | `false` | Disable emoji in output |
| `STRUCTURED_LOGGING` | `true`/`false` | `false` | Enable JSON structured logging |
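One way the numeric `LOG_LEVEL` scheme in the table can be mapped onto the standard library's levels (a sketch under the table's semantics; the helper names here are illustrative, not the configuration code actually used):

```python
import logging
import os

# 0=DEBUG, 1=INFO, 2=WARNING, 3=ERROR, matching the table above
_LEVELS = {0: logging.DEBUG, 1: logging.INFO, 2: logging.WARNING, 3: logging.ERROR}

def level_from_env() -> int:
    """Read LOG_LEVEL from the environment, defaulting to INFO (1)."""
    raw = os.environ.get("LOG_LEVEL", "1")
    try:
        return _LEVELS[int(raw)]
    except (ValueError, KeyError):
        # Unrecognized values fall back to the documented default
        return logging.INFO

def emoji_enabled() -> bool:
    """Emoji output is on unless NO_EMOJI=true."""
    return os.environ.get("NO_EMOJI", "false").lower() != "true"
```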
```bash
# Debug mode (most verbose)
export LOG_LEVEL=0

# Info mode (default)
export LOG_LEVEL=1

# Warnings only
export LOG_LEVEL=2

# Errors only
export LOG_LEVEL=3
```

```python
log.debug("Detailed diagnostic information")
log.info("General information about execution")
log.warning("Warning about potential issues")
log.error("Error messages for failures")
log.critical("Critical system failures")
```

```python
# Success confirmation
log.success("Operation completed successfully")

# Section headers
log.header("=== ANALYSIS RESULTS ===")

# Progress indicators
log.progress(current=50, total=100, task="Processing data")

# Pipeline stages
log.stage(stage_num=2, total_stages=5, stage_name="Data Analysis")

# Sub-operations
log.substep("Loading dataset...")
log.substep("Running validation...")
```

```python
# Operation timing and status
with log.operation("Data preprocessing"):
    # Code that gets timed and logged
    preprocess_data()

# Simple timing only
with log.timing("Complex calculation"):
    # Code that gets timed
    complex_calculation()
```

```python
from infrastructure.core.logging.logging_utils import setup_project_logging

# Log to file in addition to console
log = setup_project_logging(__name__, log_file="analysis.log")
log.info("This goes to both console and file")
```

```python
# Log current system resource usage
log.resource_usage("After memory-intensive operation")
```

Output includes CPU usage, memory usage, and system load.

```bash
export STRUCTURED_LOGGING=true
```

Enables JSON-formatted log output for log aggregation systems.
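JSON output of this kind is typically produced with a custom formatter; a minimal sketch (this is not the infrastructure's actual formatter, and the field names are assumptions):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "time": self.formatTime(record),
        })

# Attach the formatter to a handler as usual
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("structured_demo")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("Pipeline started")  # emitted as one JSON line
```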
```python
# ✅ GOOD: Use __name__ for proper hierarchy
log = get_logger(__name__)

# ❌ BAD: Hardcoded names
log = get_logger("my_script")
```

```python
# ✅ GOOD: Use appropriate levels
log.debug("Variable x = 42")                  # Debug: internal state
log.info("Processing 1000 files")             # Info: normal operation
log.warning("File not found, using default")  # Warning: recoverable issues
log.error("Failed to connect to database")    # Error: operation failure

# ❌ BAD: Wrong levels
log.info("x = 42")           # Too verbose for info
log.error("File not found")  # Not an error if handled
```

```python
# ✅ GOOD: Use context managers for clear operations
with log.operation("Data analysis pipeline"):
    load_data()
    process_data()
    save_results()

# ❌ BAD: Manual start/end logging
log.info("Starting data analysis")
load_data()
process_data()
save_results()
log.info("Completed data analysis")
```

```python
# ✅ GOOD: Include context in error messages
try:
    process_file(filename)
except Exception as e:
    log.error(f"Failed to process {filename}: {e}")
    raise
```

```python
class ProjectLogger:
    """Standardized logging interface for projects."""

    def __init__(self, name: str, level: Optional[int] = None)
    def debug(self, message: str, *args, **kwargs) -> None
    def info(self, message: str, *args, **kwargs) -> None
    def warning(self, message: str, *args, **kwargs) -> None
    def error(self, message: str, *args, **kwargs) -> None
    def critical(self, message: str, *args, **kwargs) -> None

    # Specialized methods
    def success(self, message: str) -> None
    def header(self, message: str) -> None
    def progress(self, current: int, total: int, task: str = "") -> None
    def stage(self, stage_num: int, total_stages: int, stage_name: str) -> None
    def substep(self, message: str) -> None

    # Context managers
    def operation(self, operation: str, level: int = logging.INFO) -> ContextManager
    def timing(self, label: str) -> ContextManager

    # Resource monitoring
    def resource_usage(self, stage_name: str = "") -> None
```

```python
def get_logger(name: str, level: Optional[int] = None) -> ProjectLogger:
    """Get a standardized logger for projects."""

def get_project_logger(name: str, level: Optional[int] = None) -> ProjectLogger:
    """Alias for get_logger."""

def setup_project_logging(name: str, level: Optional[int] = None,
                          log_file: Optional[str] = None) -> ProjectLogger:
    """Set up project logging with optional file output."""
```

Location: `projects/{project_name}/output/logs/pipeline.log`
View logs:

```bash
cat output/{project_name}/logs/pipeline.log
grep -i error output/{project_name}/logs/pipeline.log
```

The project logging system includes graceful fallback when infrastructure is unavailable:
```python
# If infrastructure import fails, falls back to basic logging
from utils.logging import get_logger  # Still works!

log = get_logger(__name__)    # Uses basic Python logging
log.info("This still works")  # Basic functionality preserved
```

1. No output visible

```bash
# Check log level
export LOG_LEVEL=0  # Enable debug output

# Check if NO_EMOJI is set (can hide success messages)
unset NO_EMOJI
```

2. Import errors

```python
# For projects, ensure conftest.py adds infrastructure to path
# This is handled automatically in the template
```

3. File logging not working

```python
# Ensure directory exists and is writable
import os
os.makedirs(os.path.dirname(log_file), exist_ok=True)
```

```python
def test_logging_no_errors():
    log = get_logger("test_module")

    # These should not raise exceptions
    log.info("Test message")
    log.success("Success message")
    log.progress(50, 100, "Test progress")

    # Context managers should work
    with log.operation("Test operation"):
        pass
```

- Error Handling Guide - Custom exception usage
- Testing Guide - Testing with logging
- API Reference - Full API documentation
- Infrastructure Logging - Infrastructure implementation