Thank you for your interest in contributing to the otel-lgtm-proxy! This document provides guidelines and information for contributors.
- Code of Conduct
- Getting Started
- Development Setup
- Project Structure
- Development Workflow
- Testing
- Code Style
- Submitting Changes
- Release Process
## Code of Conduct

This project adheres to a code of conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to the project maintainers.
## Getting Started

### Prerequisites

- **Go 1.24+**: This project uses Go 1.24 features
- **Git**: For version control
- **Docker**: For running the development stack (optional)
- **golangci-lint**: For code linting
- **mockgen**: For generating test mocks
## Development Setup

1. **Fork and Clone**

   ```bash
   git clone https://github.com/YOUR_USERNAME/otel-lgtm-proxy.git
   cd otel-lgtm-proxy
   ```

2. **Install Dependencies**

   ```bash
   go mod download
   ```

3. **Install Development Tools**

   ```bash
   # Install golangci-lint
   go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest

   # Install mockgen
   go install go.uber.org/mock/mockgen@latest
   ```

4. **Verify Setup**

   ```bash
   go test ./...
   go build ./cmd
   ```
## Project Structure

```text
├── cmd/                      # Application entry points
│   └── main.go               # Main application
├── internal/                 # Private application code
│   ├── config/               # Configuration management
│   │   └── config.go
│   ├── certutil/             # TLS certificate utilities
│   │   ├── cert_helpers.go
│   │   └── cert_helpers_test.go
│   ├── logger/               # Logging utilities
│   │   ├── logger.go
│   │   └── logger_test.go
│   ├── otel/                 # OpenTelemetry setup
│   │   ├── otel.go
│   │   └── otel_test.go
│   ├── logs/                 # Log telemetry processing
│   │   ├── logs.go
│   │   ├── logs_test.go
│   │   └── logs_mock.go
│   ├── metrics/              # Metric telemetry processing
│   │   ├── metrics.go
│   │   ├── metrics_test.go
│   │   └── metrics_mock.go
│   └── traces/               # Trace telemetry processing
│       ├── traces.go
│       ├── traces_test.go
│       └── traces_mock.go
├── test/                     # Testing tools and configurations
│   ├── docker-compose.yml    # LGTM stack for development
│   ├── *.yaml                # Service configurations
│   └── send-*.sh             # Testing scripts
├── docker-compose.yml        # LGTM development stack
├── Dockerfile                # Container build
├── go.mod                    # Go module definition
├── go.sum                    # Go module checksums
└── README.md                 # Project documentation
```
### Package Responsibilities

- **cmd/**: Contains application entry points. Keep these minimal.
- **internal/config/**: Configuration parsing and validation.
- **internal/certutil/**: TLS configuration and certificate management.
- **internal/logger/**: OpenTelemetry logging wrapper with severity filtering.
- **internal/otel/**: OpenTelemetry provider initialization and configuration.
- **internal/logs/**: Log telemetry processing with tenant partitioning and forwarding.
- **internal/metrics/**: Metric telemetry processing with temporality handling.
- **internal/traces/**: Trace telemetry processing with correlation support.
- **test/**: Testing scripts and development environment configurations.
## Development Workflow

### Branch Naming

- `main`: Production-ready code
- `feature/description`: New features
- `bugfix/description`: Bug fixes
- `docs/description`: Documentation updates
### Workflow Steps

1. **Create Feature Branch**

   ```bash
   git checkout -b feature/add-grpc-support
   ```

2. **Make Changes**

   - Write code following project conventions
   - Add tests for new functionality
   - Update documentation as needed

3. **Test Changes**

   ```bash
   go test ./...
   go test -race ./...
   go test -cover ./...
   ```

4. **Lint Code**

   ```bash
   golangci-lint run
   ```

5. **Commit Changes**

   ```bash
   git add .
   git commit -m "feat: add gRPC endpoint support"
   ```

6. **Push and Create PR**

   ```bash
   git push origin feature/add-grpc-support
   ```
### Commit Messages

Use conventional commits:

- `feat:` New features
- `fix:` Bug fixes
- `docs:` Documentation changes
- `test:` Test changes
- `refactor:` Code refactoring
- `perf:` Performance improvements
- `chore:` Maintenance tasks
Examples:

```text
feat: add support for gRPC endpoints
fix: correct tenant header forwarding logic
docs: update configuration documentation
test: add integration tests for service layer
```
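A commit-subject check for this format could be scripted along these lines. This is a sketch for illustration; the repository does not necessarily ship such a hook, and the optional `(scope)` form is an assumption borrowed from the Conventional Commits convention:

```go
package main

import (
	"fmt"
	"regexp"
)

// commitRe accepts "type: description" and optionally "type(scope): description"
// for the commit types listed above.
var commitRe = regexp.MustCompile(`^(feat|fix|docs|test|refactor|perf|chore)(\([a-z0-9-]+\))?: .+`)

// validCommit reports whether a commit subject line follows the convention.
func validCommit(subject string) bool {
	return commitRe.MatchString(subject)
}

func main() {
	fmt.Println(validCommit("feat: add gRPC endpoint support")) // true
	fmt.Println(validCommit("added some stuff"))                // false
}
```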
## Testing

### Test Organization

- **Unit Tests**: Test individual functions and methods
- **Integration Tests**: Test component interactions
- **Mock Usage**: Use mocks for external dependencies

### Naming Conventions

- **Test File Naming**: `*_test.go` in the same package
- **Test Function Naming**: `TestFunctionName` or `TestType_Method`
- **Mock File Naming**: `*_mock.go`, generated by mockgen
Example table-driven test:

```go
func TestConfig_Parse(t *testing.T) {
	tests := []struct {
		name    string
		env     map[string]string
		want    *Config
		wantErr bool
	}{
		{
			name: "valid configuration",
			env: map[string]string{
				"OTEL_SERVICE_NAME": "test-service",
			},
			want: &Config{
				OTEL: OTELConfig{
					ServiceName: "test-service",
				},
			},
			wantErr: false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// Test implementation
		})
	}
}
```

### Running Tests

```bash
# Run all tests
go test ./...

# Run tests with verbose output
go test -v ./...

# Run tests with coverage
go test -cover ./...

# Run tests with race detection
go test -race ./...

# Run specific package tests
go test ./internal/config

# Run specific test
go test -run TestConfig_Parse ./internal/config
```

### Generating Mocks

When adding new interfaces, generate mocks:
```bash
# Generate mocks for logs interfaces
mockgen -source=internal/logs/logs.go -destination=internal/logs/logs_mock.go -package=logs

# Generate mocks for metrics interfaces
mockgen -source=internal/metrics/metrics.go -destination=internal/metrics/metrics_mock.go -package=metrics

# Generate mocks for traces interfaces
mockgen -source=internal/traces/traces.go -destination=internal/traces/traces_mock.go -package=traces
```

### Test Coverage

Maintain test coverage above 80%:
```bash
go test -cover ./...
```

## Code Style

- Follow standard Go conventions
- Use `gofmt` for formatting
- Follow [Effective Go](https://go.dev/doc/effective_go)
- Use meaningful variable and function names
- Write documentation for exported functions
### Linting

Use golangci-lint with the project configuration:

```bash
golangci-lint run
```

### Error Handling

```go
// Good: Wrap errors with context
if err != nil {
	return fmt.Errorf("failed to parse config: %w", err)
}

// Good: Handle errors at the appropriate level
data, err := repository.GetData(ctx, id)
if err != nil {
	logger.Error("Failed to get data", "id", id, "error", err)
	return nil, err
}

// Good: Check HTTP status codes and return errors for failures
if resp.StatusCode >= http.StatusBadRequest {
	logger.Error(
		ctx,
		fmt.Sprintf("received non-success status code: %d", resp.StatusCode),
		log.String("status_code", strconv.Itoa(resp.StatusCode)),
	)
	return fmt.Errorf("received non-success status code: %d", resp.StatusCode)
}
```

**HTTP Error Handling:**
- Treat all HTTP status codes >= 400 as errors
- Log errors with relevant context (tenant, signal type, status code)
- Record errors in distributed tracing spans
- Return errors to enable retry logic in upstream systems
- Always record metrics even for failed requests
### Logging

Use structured logging:

```go
// Good: Structured logging
logger.Info("Processing request",
	"tenant", tenantID,
	"signal_type", "traces",
	"payload_size", len(data))

// Avoid: Formatted strings
logger.Info(fmt.Sprintf("Processing %s for tenant %s", signalType, tenantID))
```

## Submitting Changes

### Pull Request Process

1. **Fork the Repository**
2. **Create Feature Branch**
3. **Make Changes** following guidelines
4. **Write/Update Tests**
5. **Update Documentation**
6. **Submit Pull Request**
### PR Template

```markdown
## Description
Brief description of changes

## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation update
- [ ] Refactoring

## Testing
- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] Manual testing completed

## Checklist
- [ ] Code follows project style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
- [ ] Tests added/updated
```

### Review Process

- All PRs require review from maintainers
- Address feedback promptly
- Keep PRs focused and atomic
- Rebase before merging
## Protocol Support

This project ONLY supports HTTP protobuf payloads:
- ✅ OTLP/HTTP with protobuf encoding
- ❌ OTLP/gRPC
- ❌ JSON encoding
- ❌ Other serialization formats
## Adding New Signal Types

When adding support for new OpenTelemetry signals:

1. Create a new package in `internal/` (e.g., `internal/newsignal/`)
2. Implement partitioning logic similar to existing signal packages
3. Add HTTP handlers following the same pattern
4. Update configuration in `internal/config/` if needed
5. Add comprehensive tests with generated mocks
6. Update documentation
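The core of step 2, partitioning telemetry by tenant, can be sketched as follows. The `Item` type and tenant field are stand-ins: the real signal packages operate on OTLP protobuf types, and how the tenant is derived is project-specific:

```go
package main

import "fmt"

// Item stands in for one resource's worth of telemetry; the real packages
// work with OTLP protobuf structures rather than this toy type.
type Item struct {
	Tenant  string
	Payload string
}

// partitionByTenant groups items so each tenant's data can be forwarded
// in its own request with the appropriate tenant header, mirroring the
// pattern in the existing logs, metrics, and traces packages.
func partitionByTenant(items []Item) map[string][]Item {
	out := make(map[string][]Item)
	for _, it := range items {
		out[it.Tenant] = append(out[it.Tenant], it)
	}
	return out
}

func main() {
	parts := partitionByTenant([]Item{
		{Tenant: "a", Payload: "span-1"},
		{Tenant: "b", Payload: "span-2"},
		{Tenant: "a", Payload: "span-3"},
	})
	fmt.Println(len(parts["a"]), len(parts["b"])) // 2 1
}
```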
## Performance Considerations

- Minimize memory allocations in hot paths
- Use context for cancellation and timeouts
- Profile before optimizing
- Benchmark critical paths
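A common way to cut allocations in hot paths is buffer reuse via `sync.Pool`. This is a general technique offered as a sketch, not necessarily what this project does today:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool reuses buffers across requests instead of allocating a fresh
// one per payload, reducing GC pressure in hot paths.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// encode borrows a buffer from the pool, uses it, and returns it.
func encode(payload []byte) int {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // must reset before returning to the pool
		bufPool.Put(buf)
	}()
	buf.Write(payload)
	return buf.Len()
}

func main() {
	fmt.Println(encode([]byte("hello"))) // 5
}
```

Profile first (`go test -bench`, `pprof`) to confirm the allocation actually matters before adding pooling complexity.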
## Security

- Validate all inputs
- Use TLS for production deployments
- Follow secure coding practices
- Report security issues privately
## Documentation

- Update README.md for user-facing changes
- Add godoc comments for exported functions
- Update configuration documentation
- Include examples for new features
## Getting Help

- Check existing issues and discussions
- Create detailed issue reports
- Join community discussions
- Contact maintainers for sensitive issues
## License

By contributing, you agree that your contributions will be licensed under the same license as the project (MIT License).