# Security Policy

Repository: `olaflaitinen/medical_imaging_fairness`

## Overview

This project focuses on fairness and explainability in medical imaging AI. Given the sensitive nature of healthcare applications, we take security seriously and follow best practices for responsible AI development.


## Supported Versions

We release security updates for the following versions:

| Version | Status      |
|---------|-------------|
| 1.0.x   | Active      |
| < 1.0   | End of Life |

## Data Privacy and Protection

### Synthetic Data Only

**Important:** This repository uses synthetic data only. No real patient data is included.

- All medical images are artificially generated
- Demographic attributes are simulated
- No personally identifiable information (PII) is present

### Guidelines for Real Data Usage

If you adapt this code for real medical data:

1. **Obtain Ethical Approval**
   - IRB/Ethics Committee approval required
   - Patient consent for data usage
   - HIPAA compliance (US) or GDPR compliance (EU)
2. **Data Protection Measures**
   - Encrypt data at rest and in transit
   - Use secure storage (not public repositories)
   - Implement access controls
   - Anonymize/de-identify patient data
   - Remove DICOM metadata containing PII
3. **Never Commit Real Data to Git**

   ```gitignore
   # Add to .gitignore
   data/real/
   *.dcm
   *.nii
   *patient*.csv
   *phi*.json
   ```

4. **Use Environment Variables for Credentials**

   ```bash
   # Never hardcode credentials
   DATABASE_URL=${DATABASE_URL}
   API_KEY=${API_KEY}
   ```
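One practical de-identification step is stable pseudonymization: replace each patient identifier with a keyed hash, so records can still be linked across files without exposing the original ID. A minimal sketch, assuming the identifiers are short strings; the function name and salt handling are illustrative, not part of this repository:

```python
import hashlib
import hmac
import os

def pseudonymize_id(patient_id: str, salt: bytes) -> str:
    """Map a patient identifier to a stable, non-reversible pseudonym.

    A keyed hash (HMAC-SHA256) resists dictionary attacks on short IDs,
    provided the salt is kept secret (e.g. in a secrets manager).
    """
    digest = hmac.new(salt, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# The salt must come from a secret store, never from the repository.
salt = os.environ.get("PSEUDONYM_SALT", "").encode("utf-8") or os.urandom(32)
alias = pseudonymize_id("PAT-00123", salt)
```

The same salt must be used across all files of a dataset so that one patient always maps to one alias; rotating the salt breaks linkage (which can itself be desirable between releases).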

## Reporting a Vulnerability

We take all security vulnerabilities seriously. If you discover a security issue, please follow responsible disclosure practices.

### How to Report

**Do not create a public GitHub issue for security vulnerabilities.**

Instead:

1. **Email:** Send details to olyulaim@dtu.dk
2. **Subject line:** "Security Vulnerability - Medical Imaging Fairness"
3. **Include:**
   - A description of the vulnerability
   - Steps to reproduce
   - Potential impact
   - A suggested fix (if available)
   - Your contact information (optional, for credit)

### What to Expect

- **Acknowledgment:** within 48 hours
- **Initial assessment:** within 1 week
- **Status updates:** every 2 weeks
- **Resolution timeline:** depends on severity
  - Critical: 24-48 hours
  - High: 1 week
  - Medium: 2-4 weeks
  - Low: best effort

### Disclosure Policy

- We will coordinate with you on the timing of public disclosure
- Reporters will be credited unless they request anonymity
- CVEs will be requested for significant vulnerabilities
- Security advisories will be published on GitHub

## Security Best Practices

### For Users

#### Running in Production

```bash
# Use specific version tags, not 'latest'
docker pull medical-imaging-fairness:1.0.0

# Run with limited privileges
docker run --user 1000:1000 medical-imaging-fairness:1.0.0

# Mount volumes as read-only when possible
docker run -v "$(pwd)/data:/data:ro" medical-imaging-fairness:1.0.0
```

#### API Keys and Secrets

```python
# Read secrets from environment variables; never hardcode them
import os

api_key = os.getenv("API_KEY")
```

In production, prefer a dedicated secrets management tool:

- AWS Secrets Manager
- Azure Key Vault
- HashiCorp Vault
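Because `os.getenv` returns `None` for missing variables, a misconfigured deployment can run until the first API call fails. A small fail-fast helper (hypothetical, not part of this codebase) surfaces the problem at startup instead:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, failing fast if it is unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Required environment variable {name} is not set")
    return value

# For illustration only: seed a value so the call below succeeds.
os.environ.setdefault("API_KEY", "example-token")
api_key = require_env("API_KEY")
```

Crashing at startup with an explicit message is far easier to debug than a later authentication error deep inside a request path.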

#### Dependency Management

```bash
# Regularly update dependencies
pip install --upgrade pip
pip install --upgrade -r requirements.txt

# Check for known vulnerabilities
pip install safety
safety check

# Audit npm packages (if using Node.js tools)
npm audit
```

### For Contributors

#### Code Review Checklist

- [ ] No hardcoded credentials or API keys
- [ ] No PII or PHI in test data
- [ ] Input validation for user-provided data
- [ ] Proper error handling (no stack traces in production)
- [ ] Dependencies from trusted sources only
- [ ] Security implications documented

#### Secure Coding Practices

```python
import os

from PIL import Image

def load_image(image_path: str) -> Image.Image:
    """Validate and open an image, rejecting suspicious files early."""
    # Validate file extension
    allowed_extensions = {'.png', '.jpg', '.jpeg'}
    if not any(image_path.lower().endswith(ext) for ext in allowed_extensions):
        raise ValueError(f"Invalid file extension: {image_path}")

    # Validate file size (10 MB limit)
    max_size = 10 * 1024 * 1024
    if os.path.getsize(image_path) > max_size:
        raise ValueError("File too large")

    # Validate image content (verify() detects truncated or corrupt files)
    try:
        img = Image.open(image_path)
        img.verify()
    except Exception as e:
        raise ValueError(f"Invalid image: {e}") from e

    # verify() invalidates the handle, so reopen the file for actual use
    return Image.open(image_path)
```

## Known Security Considerations

### Model Fairness and Bias

**Risk:** Biased models can perpetuate healthcare disparities.

**Mitigation:**

- Comprehensive fairness evaluation across demographic groups
- Regular audits of model performance
- Transparency about limitations and failure modes
- User warnings about deployment contexts
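Fairness evaluation across demographic groups can be made concrete by comparing group-wise error rates. A minimal sketch of an equal-opportunity gap on binary predictions; the function names are illustrative, not this repository's API:

```python
from collections import defaultdict

def tpr_by_group(y_true, y_pred, groups):
    """True-positive rate per demographic group, for binary labels."""
    tp, pos = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            pos[g] += 1
            tp[g] += p == 1
    return {g: tp[g] / pos[g] for g in pos}

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest pairwise TPR difference across groups; 0.0 means parity."""
    rates = tpr_by_group(y_true, y_pred, groups).values()
    return max(rates) - min(rates)

# Toy example: group "a" has TPR 0.5, group "b" has TPR 1.0, so the gap is 0.5
gap = equal_opportunity_gap(
    y_true=[1, 1, 1, 1],
    y_pred=[1, 0, 1, 1],
    groups=["a", "a", "b", "b"],
)
```

Tracking such a gap over time turns "regular audits" into a measurable check that can gate releases.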

### Adversarial Attacks

**Risk:** Medical AI models can be vulnerable to adversarial perturbations.

**Mitigation:**

- This is a research codebase (not production-ready)
- Adversarial robustness testing is recommended before deployment
- Consider adversarial training for production models
- Implement input validation and anomaly detection
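Input validation can start with cheap sanity checks on the raw pixel values. A minimal sketch with illustrative thresholds; this only rejects malformed or clearly out-of-range inputs and is not a defense against a determined adversary:

```python
def check_input_range(pixels, lo=0.0, hi=1.0, max_saturated=0.5):
    """Cheap sanity checks on a flat list of pixel intensities.

    Rejects empty inputs, values outside [lo, hi], and images whose
    saturation fraction is implausibly high for a medical scan.
    """
    if not pixels:
        raise ValueError("Empty input")
    if any(p < lo or p > hi for p in pixels):
        raise ValueError("Pixel values outside expected range")
    saturated = sum(1 for p in pixels if p in (lo, hi)) / len(pixels)
    if saturated > max_saturated:
        raise ValueError(f"Suspicious saturation: {saturated:.0%} of pixels at the limits")
    return True
```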

### Model Interpretability

**Risk:** Black-box models may make incorrect decisions without explanation.

**Mitigation:**

- Multiple explainability methods provided (SHAP, Grad-CAM)
- Faithfulness scoring to validate explanations
- A Concept Bottleneck Model for inherent interpretability
- Documentation of model limitations

### Data Poisoning

**Risk:** Training data can be manipulated to introduce biases.

**Mitigation:**

- Use trusted data sources
- Validate data integrity (checksums, hashes)
- Monitor training metrics for anomalies
- Version control for datasets
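Integrity validation can be as simple as checking every file against a pinned manifest of SHA-256 digests committed alongside the dataset version. A minimal sketch; the manifest format (a path-to-digest mapping) is an assumption, not part of this repository:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large imaging volumes fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(manifest: dict) -> list:
    """Return the paths whose on-disk hash no longer matches the manifest."""
    return [path for path, expected in manifest.items()
            if sha256_of(path) != expected]
```

Running such a check before every training run makes silent tampering with the data visible as a hard failure.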

## Compliance Considerations

### Healthcare Regulations

If deploying in clinical settings, ensure compliance with:

- **HIPAA (United States)**
  - Privacy Rule: protect PHI
  - Security Rule: safeguard electronic PHI
  - Breach Notification Rule: report breaches
- **GDPR (European Union)**
  - Right to explanation for automated decisions
  - Data minimization principles
  - Consent requirements
- **FDA Regulations (United States)**
  - Medical device classification
  - Pre-market approval (if applicable)
  - Post-market surveillance

### AI Ethics Guidelines

Follow established frameworks:

- WHO Guidelines on AI for Health (2021)
- EU AI Act (high-risk AI systems)
- IEEE Ethically Aligned Design
- ACM Code of Ethics

## Dependency Security

### Current Dependencies

We regularly monitor dependencies for known vulnerabilities:

```bash
# Check Python dependencies
pip install safety
safety check -r requirements.txt

# Check for outdated packages
pip list --outdated
```

### Critical Dependencies

| Package     | Current Version | Security Status |
|-------------|-----------------|-----------------|
| torch       | 2.0.1           | Monitored       |
| torchvision | 0.15.2          | Monitored       |
| numpy       | 1.24.3          | Monitored       |
| Pillow      | 10.0.0          | Monitored       |

### Update Schedule

- **Security patches:** immediately
- **Minor updates:** monthly
- **Major updates:** quarterly (with testing)

## Incident Response Plan

### In Case of a Security Breach

1. **Immediate actions**
   - Isolate affected systems
   - Notify the maintainers via email
   - Document the incident timeline
2. **Assessment**
   - Determine the scope of the breach
   - Identify affected data and systems
   - Assess the potential impact
3. **Containment**
   - Apply emergency patches
   - Revoke compromised credentials
   - Update access controls
4. **Recovery**
   - Restore from clean backups
   - Verify system integrity
   - Monitor for recurrence
5. **Post-incident**
   - Perform a root cause analysis
   - Update security measures
   - Notify affected users
   - Disclose publicly (if warranted)

## Security Resources

### External Tools

- **Bandit:** Python security linter

  ```bash
  pip install bandit
  bandit -r .
  ```

- **Safety:** dependency vulnerability scanner

  ```bash
  pip install safety
  safety check
  ```

- **Trivy:** Docker image vulnerability scanner

  ```bash
  trivy image medical-imaging-fairness:1.0.0
  ```


## Contact

**Security Team:** olyulaim@dtu.dk

For general questions, use GitHub Issues. For security vulnerabilities, always use the private email address above.


Acknowledgments

We thank the security research community for helping keep this project secure.

Hall of Fame (Security Researchers):

  • Your name could be here!

Last Updated: 6 November 2025 Version: 1.0.0
