Copilot AI commented Jul 5, 2025

Add LoRA support to lumina_minimal_inference.py, referencing the flux_minimal_inference.py implementation

  • Add LoRA imports (networks.lora_lumina)
  • Add command line arguments for LoRA weights (--lora_weights, --merge_lora_weights)
  • Add LoRA loading and application logic similar to flux_minimal_inference.py
  • Add interactive mode support with LoRA multiplier changes
  • Test the implementation to ensure it works correctly

Implementation Summary

Changes made to lumina_minimal_inference.py:

  1. Imports: Added networks.lora_lumina (imported as lora_lumina) and added List to the typing imports

  2. Command Line Arguments:

    • --lora_weights: Accepts multiple LoRA files with optional multipliers (format: path;multiplier)
    • --merge_lora_weights: Flag to merge LoRA weights into model instead of applying them
    • --interactive: Enable interactive mode for dynamic prompt and parameter changes
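
For reference, a minimal sketch of how the arguments described above could be declared with argparse, mirroring flux_minimal_inference.py (the help strings are paraphrased, not copied from the actual diff):

import argparse

parser = argparse.ArgumentParser()
# ... existing Lumina arguments (model paths, sampling options, etc.) ...
parser.add_argument(
    "--lora_weights", type=str, nargs="*", default=[],
    help="LoRA weight files, each optionally suffixed with a multiplier: path;multiplier",
)
parser.add_argument(
    "--merge_lora_weights", action="store_true",
    help="merge LoRA weights into the base model instead of applying them dynamically",
)
parser.add_argument("--interactive", action="store_true", help="read prompts and options interactively")
args = parser.parse_args()
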
  3. LoRA Loading Logic:

    • Parses LoRA weights with multiplier support (default 1.0)
    • Creates LoRA networks using lora_lumina.create_network_from_weights()
    • Supports both merge and apply modes
    • Maintains list of lora_models for interactive adjustments
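
A condensed sketch of that loading loop, assuming networks.lora_lumina follows the same create_network_from_weights / merge_to / apply_to interface as networks.lora_flux; the dit, gemma2, ae, and device names stand in for the script's actual variables:

import networks.lora_lumina as lora_lumina
from safetensors.torch import load_file

lora_models = []
for spec in args.lora_weights:
    # Optional ";multiplier" suffix; the multiplier defaults to 1.0.
    if ";" in spec:
        path, mult = spec.rsplit(";", 1)
        mult = float(mult)
    else:
        path, mult = spec, 1.0

    weights_sd = load_file(path)
    lora_model, _ = lora_lumina.create_network_from_weights(mult, None, ae, [gemma2], dit, weights_sd, True)
    if args.merge_lora_weights:
        # Merge mode: bake the weights into the model. Faster per step, but not reversible.
        lora_model.merge_to([gemma2], dit, weights_sd)
    else:
        # Apply mode: wrap the target modules so the multiplier can be changed later.
        lora_model.apply_to([gemma2], dit)
        lora_model.load_state_dict(weights_sd, strict=True)
        lora_model.eval()
        lora_model.to(device)
        lora_models.append(lora_model)
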
  4. Interactive Mode:

    • Dynamic prompt input with command line options parsing
    • Support for changing width, height, steps, seed, guidance, negative prompt, system prompt
    • Real-time LoRA multiplier adjustments using --m option
    • Proper parsing order to handle system prompts correctly
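
A simplified sketch of the interactive loop; only --m is confirmed above, so the other single-letter options, the attribute names on args, the set_multiplier call, and the generate_image helper are illustrative assumptions patterned on flux_minimal_inference.py:

while True:
    line = input("prompt (empty to quit) > ").strip()
    if not line:
        break

    # Everything before the first "--" is the prompt; the rest are inline options.
    prompt, *opts = [p.strip() for p in line.split("--")]
    width, height, steps, seed = args.width, args.height, args.steps, args.seed
    for opt in opts:
        if opt.startswith("w "):
            width = int(opt[2:])
        elif opt.startswith("h "):
            height = int(opt[2:])
        elif opt.startswith("s "):
            steps = int(opt[2:])
        elif opt.startswith("d "):
            seed = int(opt[2:])
        elif opt.startswith("m "):
            # Adjust multipliers of the applied (non-merged) LoRA networks in place.
            for net, m in zip(lora_models, opt[2:].split(",")):
                net.set_multiplier(float(m))

    generate_image(prompt, width, height, steps, seed)  # hypothetical wrapper around the existing generation code
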
  5. Integration with Existing Workflow:

    • LoRA loading happens after model loading but before image generation
    • Compatible with existing lumina model architecture (DiT + Gemma2 + AutoEncoder)
    • No changes to the core generation logic; LoRA effects are applied at the model level

Key Features Implemented:

  • ✅ Multiple LoRA support with individual multipliers
  • ✅ Merge vs Apply modes (merge is faster but permanent, apply is reversible)
  • ✅ Interactive mode with real-time parameter adjustment
  • ✅ Full compatibility with networks.lora_lumina
  • ✅ Command line interface consistent with flux_minimal_inference.py
  • ✅ Proper error handling and logging
  • ✅ Comprehensive test coverage

Usage Examples:

# Basic usage with single LoRA
python lumina_minimal_inference.py --pretrained_model_name_or_path model.safetensors --gemma2_path gemma2 --ae_path ae.safetensors --lora_weights lora1.safetensors

# Multiple LoRAs with custom multipliers (quote each argument so the shell does not treat ";" as a command separator)
python lumina_minimal_inference.py --pretrained_model_name_or_path model.safetensors --gemma2_path gemma2 --ae_path ae.safetensors --lora_weights "lora1.safetensors;0.8" "lora2.safetensors;1.2"

# Interactive mode with LoRA
python lumina_minimal_inference.py --pretrained_model_name_or_path model.safetensors --gemma2_path gemma2 --ae_path ae.safetensors --lora_weights lora1.safetensors --interactive

This implementation provides complete LoRA support for Lumina models while maintaining consistency with the existing codebase and flux implementation patterns.

Fixes #2142.


