WOODSEE-DIGI/SwiftMaestro
Swift Maestro

Swift Maestro is a native macOS SwiftUI rebuild of Maestro focused on local-first AI agent orchestration.

Phase 1 replaces the Electron/Node stack with a single macOS app that can:

  • create named agents
  • connect each agent to an OpenAI-compatible local LLM endpoint
  • stream assistant replies into the chat UI
  • persist agent histories on disk
  • store API tokens in the macOS Keychain instead of plaintext config files

Current status

Phase 1 is implemented and verified.

Verified on 2026-04-15:

  • the app builds successfully with xcodebuild
  • the app launches successfully as a native macOS application
  • LM Studio authentication works with a Bearer token stored in Keychain
  • both saved local agents (Mistral and Gemma4light) completed end-to-end chat round trips
  • the real app adapter path was verified through:
    • PersistenceService
    • ProviderFactoryService
    • KeychainService
    • LocalLLMExecutor
    • LocalLLMAgentAdapter
  • live UI automation confirmed the visible app can:
    • select agents in the sidebar
    • type into the message field
    • send messages
    • persist assistant replies

Architecture

The app is intentionally simple in Phase 1:

SwiftMaestro/
├── project.yml
├── Sources/
│   ├── App/
│   ├── Models/
│   ├── Protocols/
│   ├── Services/
│   ├── Adapters/
│   ├── ViewModels/
│   └── Views/
└── Resources/

Key layers:

  • Models/Agent, Message, LocalLLMConfig
  • Protocols/MaestroAgentProtocol
  • Services/ — persistence, Keychain access, adapter factory
  • Adapters/ — OpenAI-compatible local LLM transport and streaming logic
  • ViewModels/ — app, chat, and wizard state
  • Views/ — SwiftUI interface for sidebar, chat, settings, and agent creation
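As an illustration of how these layers fit together, the following sketch shows plausible shapes for the model and protocol layers; the type names come from the list above, but all field names and the protocol signature are assumptions, not the app's actual code:

```swift
import Foundation

// Plausible shapes for the model layer; field names are illustrative.
struct LocalLLMConfig: Codable, Identifiable {
    var id: UUID
    var endpointURL: URL
    var modelID: String
    var requiresAPIKey: Bool
}

struct Message: Codable, Identifiable {
    enum Role: String, Codable { case system, user, assistant }
    var id: UUID
    var role: Role
    var text: String
    var timestamp: Date
}

struct Agent: Codable, Identifiable {
    var id: UUID
    var name: String
    var configID: UUID      // links the agent to a saved LocalLLMConfig
    var history: [Message]  // persisted to agents.json
}

/// Sketch of the adapter seam: anything that can stream a reply can
/// back an agent, which is what keeps future providers pluggable.
protocol MaestroAgentProtocol {
    func send(_ prompt: String, history: [Message]) -> AsyncThrowingStream<String, Error>
}
```

Because the models are plain Codable structs, persistence reduces to encoding and decoding JSON files under Application Support.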

Streaming implementation

Streaming uses the native Swift concurrency stack:

  • URLSession.shared.bytes(for:)
  • SSE line parsing via asyncBytes.lines
  • chunk decoding from data: ... payloads
  • AsyncThrowingStream<String, Error> for UI consumption

This avoids the incorrect pseudocode pattern URLSession.shared.uploadStream, an API that does not exist in Foundation.
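The steps above can be sketched as follows; the function names and error handling are illustrative, not the app's actual LocalLLMExecutor code:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

/// Extracts the assistant text delta from one SSE line of an
/// OpenAI-compatible chat-completions stream. Non-data lines and the
/// final "data: [DONE]" sentinel yield nil.
func chunkContent(fromSSELine line: String) -> String? {
    guard line.hasPrefix("data: ") else { return nil }
    let payload = String(line.dropFirst("data: ".count))
    guard payload != "[DONE]", let data = payload.data(using: .utf8) else { return nil }
    guard
        let object = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
        let choices = object["choices"] as? [[String: Any]],
        let delta = choices.first?["delta"] as? [String: Any],
        let content = delta["content"] as? String
    else { return nil }
    return content
}

/// Bridges URLSession.shared.bytes(for:) line iteration into an
/// AsyncThrowingStream<String, Error> that the chat UI can consume.
func streamChat(_ request: URLRequest) -> AsyncThrowingStream<String, Error> {
    AsyncThrowingStream { continuation in
        let task = Task {
            do {
                let (bytes, _) = try await URLSession.shared.bytes(for: request)
                for try await line in bytes.lines {
                    if let chunk = chunkContent(fromSSELine: line) {
                        continuation.yield(chunk)
                    }
                }
                continuation.finish()
            } catch {
                continuation.finish(throwing: error)
            }
        }
        continuation.onTermination = { _ in task.cancel() }
    }
}
```

Keeping the line parser a pure function makes the SSE decoding trivially testable without a live server.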

Local-first security model

Swift Maestro is local-first by default:

  • no analytics
  • no crash reporting
  • no cloud dependency unless the user explicitly configures one

Sensitive data handling:

  • endpoint URL and model ID are stored in:
    • ~/Library/Application Support/SwiftMaestro/configs.json
  • agent histories are stored in:
    • ~/Library/Application Support/SwiftMaestro/agents.json
  • API tokens are stored in the macOS Keychain under:
    • service: com.woodseedigi.SwiftMaestro
    • account: apikey.<config-id>
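A minimal sketch of how a token could be written under that service/account convention with the Security framework; the helper names are hypothetical, not the app's actual KeychainService API:

```swift
import Foundation
#if canImport(Security)
import Security
#endif

/// Service and account-naming convention from the list above.
let keychainService = "com.woodseedigi.SwiftMaestro"

func keychainAccount(forConfigID id: String) -> String {
    "apikey.\(id)"
}

#if canImport(Security)
/// Stores (or replaces) an API token for the given config ID.
/// Hypothetical helper, not the app's actual KeychainService API.
func saveToken(_ token: String, configID: String) throws {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: keychainService,
        kSecAttrAccount as String: keychainAccount(forConfigID: configID),
    ]
    SecItemDelete(query as CFDictionary)  // drop any stale item first
    var attributes = query
    attributes[kSecValueData as String] = Data(token.utf8)
    let status = SecItemAdd(attributes as CFDictionary, nil)
    guard status == errSecSuccess else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
}
#endif
```

One Keychain item per saved config means deleting a config can also delete exactly its token, with nothing sensitive left in configs.json.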

Building

Requirements:

  • Xcode
  • Swift 6 toolchain available locally
  • xcodegen

Generate the Xcode project:

xcodegen generate

Build:

xcodebuild -project SwiftMaestro.xcodeproj -scheme SwiftMaestro -destination "platform=macOS" -configuration Debug build

Launch:

open ~/Library/Developer/Xcode/DerivedData/SwiftMaestro-*/Build/Products/Debug/SwiftMaestro.app

LM Studio setup

Swift Maestro expects an OpenAI-compatible endpoint such as LM Studio.

Typical settings:

  • Endpoint URL: http://<host>:1234
  • Model ID: the exact model identifier returned by /v1/models
  • Requires API Key: enabled if LM Studio authentication is enabled

Notes:

  • when importing a token manually, the file must contain only the plain token text
  • an earlier verification failure was traced to a token file that contained raw RTF markup instead of the bare token value
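For reference, the request the app would issue against such an endpoint can be sketched like this; the helper name, host, and token are placeholders, and hitting /v1/models is a quick way to confirm both the exact model identifier and that the Bearer token is accepted:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

/// Builds the request used to list models on an OpenAI-compatible
/// endpoint such as LM Studio (helper name is illustrative).
func modelsRequest(endpoint: URL, token: String?) -> URLRequest {
    var request = URLRequest(url: endpoint.appendingPathComponent("v1/models"))
    if let token {
        // Same Bearer scheme the chat requests use.
        request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    }
    return request
}
```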

Known current limitations

Phase 1 is intentionally narrow:

  • only the local LLM provider path is active
  • Claude Code is visible as a future provider but is not implemented yet
  • settings currently operate on the first/default saved config rather than a fully explicit active-config model
  • multiple saved configs can exist simultaneously; this works, but config management should be made more deterministic in a later cleanup pass
  • connection test messaging is still minimal and should expose clearer server error detail

Verification notes

The following real issues were discovered and resolved during verification:

  • multiple saved configs were pointing at different Keychain entries
  • one agent was authenticated while another was not
  • the imported token file was not plain text and had to be normalized

After normalizing saved configs and Keychain entries, both agents were verified successfully.

Next development steps

The next work should focus on stabilizing the current Phase 1 foundation before expanding the feature set.

Immediate priorities

  1. Config model cleanup

    • introduce an explicit active/default config concept
    • stop relying on the first config in the saved array
    • add a migration path for existing duplicated configs
  2. Settings and error handling

    • show the actual server error body when connection tests fail
    • make it clearer which saved config is being edited
    • improve token import/edit flows so malformed token files are easier to detect
  3. Chat UX hardening

    • clear stale draft text when switching agents
    • make message submission more deterministic across focus changes
    • improve display of streamed failures so raw response dumps do not dominate the conversation view
  4. Persistence cleanup

    • deduplicate saved configs
    • consider separating transient UI draft state from persisted agent history
    • add a lightweight reset/recovery path for broken local config state
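Items 1 and 4 could be sketched roughly as follows, assuming a hypothetical ConfigStore type; the type, property names, and duplicate key are all illustrative:

```swift
import Foundation

// Hypothetical sketch of an explicit active-config model; not the
// app's current code.
struct SavedConfig: Codable {
    var id: UUID
    var endpointURL: URL
    var modelID: String
}

struct ConfigStore: Codable {
    var configs: [SavedConfig]
    var activeConfigID: UUID?

    /// The config Settings should edit: the explicitly chosen one,
    /// falling back to the first saved config only until migration runs.
    var activeConfig: SavedConfig? {
        configs.first { $0.id == activeConfigID } ?? configs.first
    }

    /// Migration pass: drop duplicates (same endpoint + model) and pin
    /// the active ID so later loads are deterministic.
    mutating func migrate() {
        var seen = Set<String>()
        configs = configs.filter { config in
            seen.insert("\(config.endpointURL.absoluteString)|\(config.modelID)").inserted
        }
        if activeConfigID == nil { activeConfigID = configs.first?.id }
    }
}
```

Running the migration once at load time would resolve the duplicated-config and wrong-Keychain-entry class of issues seen during verification.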

Phase 2 target

Once the current local LLM flow is cleaned up, Phase 2 should expand the app without changing its local-first philosophy:

  • git-aware workspace features
  • multi-tab agent sessions
  • better project/workspace context management
  • more robust settings management for multiple local endpoints

Phase 3 target

After the Phase 1/2 foundation is stable:

  • Claude Code adapter
  • Codex adapter
  • broader orchestration features
  • richer usage and diagnostics UI, remaining local-only unless explicitly enabled

Suggested implementation order

The recommended order for the next development session is:

  1. fix active config selection
  2. add config deduplication/migration
  3. improve Settings error reporting
  4. clean chat draft/send behavior
  5. expand into multi-session and git-aware features
