This project focuses on fairness and explainability in medical imaging AI. Given the sensitive nature of healthcare applications, we take security seriously and follow best practices for responsible AI development.
We release security updates for the following versions:
| Version | Supported | Status |
|---|---|---|
| 1.0.x | ✅ | Active |
| < 1.0 | ❌ | End of Life |
Important: This repository uses synthetic data only. No real patient data is included.
- All medical images are artificially generated
- Demographic attributes are simulated
- No personally identifiable information (PII) is present
If you adapt this code for real medical data:
1. **Obtain Ethical Approval**
   - IRB/Ethics Committee approval required
   - Patient consent for data usage
   - HIPAA compliance (US) or GDPR compliance (EU)

2. **Data Protection Measures**
   - Encrypt data at rest and in transit
   - Use secure storage (not public repositories)
   - Implement access controls
   - Anonymize/de-identify patient data
   - Remove DICOM metadata containing PII

3. **Never Commit Real Data to Git**

   ```gitignore
   # Add to .gitignore
   data/real/
   *.dcm
   *.nii
   *patient*.csv
   *phi*.json
   ```

4. **Use Environment Variables for Credentials**

   ```bash
   # Never hardcode credentials
   DATABASE_URL=${DATABASE_URL}
   API_KEY=${API_KEY}
   ```
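The anonymization step in the checklist above can be sketched for tabular demographic data. This is a minimal illustration, not code from this repository; the column names (`patient_name`, `ssn`, `patient_id`) are purely illustrative assumptions:

```python
import csv
import hashlib
import io

# Columns assumed to contain direct identifiers (illustrative only)
PII_COLUMNS = {"patient_name", "ssn"}
# Columns to pseudonymize with a salted hash rather than drop outright
PSEUDONYM_COLUMNS = {"patient_id"}

def deidentify_csv(text: str, salt: str) -> str:
    """Drop PII columns and hash pseudonym columns in a CSV string."""
    reader = csv.DictReader(io.StringIO(text))
    kept = [c for c in reader.fieldnames if c not in PII_COLUMNS]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=kept)
    writer.writeheader()
    for row in reader:
        clean = {c: row[c] for c in kept}
        for c in PSEUDONYM_COLUMNS & set(kept):
            # Salted hash: stable within a release, not linkable across salts
            clean[c] = hashlib.sha256((salt + row[c]).encode()).hexdigest()[:16]
        writer.writerow(clean)
    return out.getvalue()
```

Real de-identification pipelines must also handle quasi-identifiers (dates, ZIP codes) and DICOM headers, which this sketch does not cover.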
We take all security vulnerabilities seriously. If you discover a security issue, please follow responsible disclosure practices.
DO NOT create a public GitHub issue for security vulnerabilities.
Instead:
- Email: Send details to olyulaim@dtu.dk
- Subject Line: "Security Vulnerability - Medical Imaging Fairness"
- Include:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
- Suggested fix (if available)
- Your contact information (optional for credit)
- Acknowledgment: Within 48 hours
- Initial Assessment: Within 1 week
- Status Updates: Every 2 weeks
- Resolution Timeline: Depends on severity
- Critical: 24-48 hours
- High: 1 week
- Medium: 2-4 weeks
- Low: Best effort
- We will coordinate with you on public disclosure timing
- Credit will be given to reporters (unless anonymity requested)
- CVE assignment for significant vulnerabilities
- Security advisory published on GitHub
```bash
# Use specific version tags, not 'latest'
docker pull medical-imaging-fairness:1.0.0

# Run with limited privileges
docker run --user 1000:1000 medical-imaging-fairness:1.0.0

# Mount volumes as read-only when possible
docker run -v $(pwd)/data:/data:ro medical-imaging-fairness:1.0.0
```

```python
# Use environment variables
import os
api_key = os.getenv("API_KEY")

# Use secrets management tools:
# - AWS Secrets Manager
# - Azure Key Vault
# - HashiCorp Vault
```

```bash
# Regularly update dependencies
pip install --upgrade pip
pip install --upgrade -r requirements.txt

# Check for known vulnerabilities
pip install safety
safety check

# Audit npm packages (if using Node.js tools)
npm audit
```

- No hardcoded credentials or API keys
- No PII or PHI in test data
- Input validation for user-provided data
- Proper error handling (no stack traces in production)
- Dependencies from trusted sources only
- Security implications documented
```python
# Input validation
import os
from PIL import Image

def load_image(image_path: str):
    # Validate file extension
    allowed_extensions = {'.png', '.jpg', '.jpeg'}
    if not any(image_path.lower().endswith(ext) for ext in allowed_extensions):
        raise ValueError("Invalid file extension")

    # Validate file size
    max_size = 10 * 1024 * 1024  # 10 MB
    if os.path.getsize(image_path) > max_size:
        raise ValueError("File too large")

    # Validate image content
    try:
        img = Image.open(image_path)
        img.verify()
    except Exception as e:
        raise ValueError(f"Invalid image: {e}")
```

Risk: Biased models can perpetuate healthcare disparities.
Mitigation:
- Comprehensive fairness evaluation across demographic groups
- Regular audits of model performance
- Transparency in limitations and failure modes
- User warnings about deployment contexts
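A per-group performance audit of the kind listed above can be sketched as follows. The group labels and data are illustrative assumptions, not values from this project:

```python
from collections import defaultdict

def group_accuracy_gap(y_true, y_pred, groups):
    """Return per-group accuracy and the max pairwise accuracy gap."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy audit: group A gets 2/3 correct, group B 3/3
acc, gap = group_accuracy_gap(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 1],
    groups=["A", "A", "A", "B", "B", "B"],
)
```

A large gap between groups is a signal to investigate before deployment; the acceptable threshold depends on the clinical context.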
Risk: Medical AI models can be vulnerable to adversarial perturbations.
Mitigation:
- This is a research codebase (not production-ready)
- Adversarial robustness testing recommended before deployment
- Consider adversarial training for production models
- Implement input validation and anomaly detection
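A minimal input anomaly check along the lines suggested above might flag images whose pixel statistics fall far outside the expected distribution. The thresholds here are illustrative assumptions, not calibrated values:

```python
def is_anomalous(pixels, mean_range=(30.0, 220.0), std_min=5.0):
    """Flag grayscale pixel values that look out-of-distribution.

    pixels: flat iterable of 0-255 intensity values.
    """
    vals = list(pixels)
    if not vals:
        return True
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    std = var ** 0.5
    lo, hi = mean_range
    # Near-constant or extreme-brightness images suggest corrupted
    # or adversarially clipped inputs.
    return not (lo <= mean <= hi) or std < std_min
```

This catches only crude corruption; defending against targeted adversarial perturbations requires dedicated techniques such as adversarial training.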
Risk: Black-box models may make incorrect decisions without explanation.
Mitigation:
- Multiple explainability methods provided (SHAP, GradCAM)
- Faithfulness scoring to validate explanations
- Concept Bottleneck Model for inherent interpretability
- Documentation of model limitations
Risk: Training data can be manipulated to introduce biases.
Mitigation:
- Use trusted data sources
- Validate data integrity (checksums, hashes)
- Monitor training metrics for anomalies
- Version control for datasets
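The checksum validation mentioned above can be sketched with the standard library. The manifest format here is an assumption for illustration, not part of this repository:

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    """Hex SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_manifest(files: dict, manifest: dict) -> list:
    """Return names whose content hash disagrees with the manifest.

    files: name -> raw bytes; manifest: name -> expected SHA-256 digest.
    Missing files are reported as mismatches.
    """
    return [
        name for name, expected in manifest.items()
        if sha256_bytes(files.get(name, b"")) != expected
    ]
```

Checking the dataset against a version-controlled manifest before every training run makes silent tampering detectable.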
If deploying in clinical settings, ensure compliance with:
1. **HIPAA (United States)**
   - Privacy Rule: Protect PHI
   - Security Rule: Safeguard electronic PHI
   - Breach Notification Rule: Report breaches

2. **GDPR (European Union)**
   - Right to explanation for automated decisions
   - Data minimization principles
   - Consent requirements

3. **FDA Regulations (United States)**
   - Medical device classification
   - Pre-market approval (if applicable)
   - Post-market surveillance
Follow established frameworks:
- WHO Guidelines on AI for Health (2021)
- EU AI Act (High-Risk AI Systems)
- IEEE Ethically Aligned Design
- ACM Code of Ethics
We regularly monitor dependencies for known vulnerabilities:
```bash
# Check Python dependencies
pip install safety
safety check -r requirements.txt

# Check for outdated packages
pip list --outdated
```

| Package | Current Version | Security Status |
|---|---|---|
| torch | 2.0.1 | |
| torchvision | 0.15.2 | |
| numpy | 1.24.3 | |
| Pillow | 10.0.0 | |
- Security patches: Immediately
- Minor updates: Monthly
- Major updates: Quarterly (with testing)
1. **Immediate Actions**
   - Isolate affected systems
   - Notify maintainers via email
   - Document incident timeline

2. **Assessment**
   - Determine scope of breach
   - Identify affected data/systems
   - Assess potential impact

3. **Containment**
   - Apply emergency patches
   - Revoke compromised credentials
   - Update access controls

4. **Recovery**
   - Restore from clean backups
   - Verify system integrity
   - Monitor for recurrence

5. **Post-Incident**
   - Root cause analysis
   - Update security measures
   - Notify affected users
   - Public disclosure (if warranted)
- **Bandit**: Python security linter

  ```bash
  pip install bandit
  bandit -r .
  ```

- **Safety**: Dependency vulnerability scanner

  ```bash
  pip install safety
  safety check
  ```

- **Trivy**: Docker image vulnerability scanner

  ```bash
  trivy image medical-imaging-fairness:1.0.0
  ```
Security Team: olyulaim@dtu.dk
For general questions, use GitHub Issues.
For security vulnerabilities, always use private email.
We thank the security research community for helping keep this project secure.
Hall of Fame (Security Researchers):
- Your name could be here!
Last Updated: 6 November 2025
Version: 1.0.0