Swift Maestro is a native macOS SwiftUI rebuild of Maestro focused on local-first AI agent orchestration.
Phase 1 replaces the Electron/Node stack with a single macOS app that can:
- create named agents
- connect each agent to an OpenAI-compatible local LLM endpoint
- stream assistant replies into the chat UI
- persist agent histories on disk
- store API tokens in the macOS Keychain instead of plaintext config files
Phase 1 is implemented and verified.
Verified on 2026-04-15:
- the app builds successfully with `xcodebuild`
- the app launches successfully as a native macOS application
- LM Studio authentication works with a Bearer token stored in Keychain
- both saved local agents (`Mistral` and `Gemma4light`) completed end-to-end chat round trips
- the real app adapter path was verified through: `PersistenceService` → `ProviderFactoryService` → `KeychainService` → `LocalLLMExecutor` → `LocalLLMAgentAdapter`
- live UI automation confirmed the visible app can:
- select agents in the sidebar
- type into the message field
- send messages
- persist assistant replies
The app is intentionally simple in Phase 1:
```
SwiftMaestro/
├── project.yml
├── Sources/
│   ├── App/
│   ├── Models/
│   ├── Protocols/
│   ├── Services/
│   ├── Adapters/
│   ├── ViewModels/
│   └── Views/
└── Resources/
```
Key layers:
- `Models/`: `Agent`, `Message`, `LocalLLMConfig`
- `Protocols/`: `MaestroAgentProtocol`
- `Services/`: persistence, Keychain access, adapter factory
- `Adapters/`: OpenAI-compatible local LLM transport and streaming logic
- `ViewModels/`: app, chat, and wizard state
- `Views/`: SwiftUI interface for sidebar, chat, settings, and agent creation
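To make the layering concrete, here is a minimal sketch of how the model types and the agent protocol could fit together. The member names and the protocol requirement shown here are assumptions for illustration, not the app's actual declarations.

```swift
import Foundation

// Illustrative sketch only: field names and the protocol requirement are
// assumptions, not the app's real declarations.
struct Message: Identifiable, Codable {
    let id: UUID
    var role: String      // e.g. "user" or "assistant"
    var content: String
}

struct Agent: Identifiable, Codable {
    let id: UUID
    var name: String
    var history: [Message]
}

// Adapters conform to a single protocol so view models stay provider-agnostic.
protocol MaestroAgentProtocol {
    func send(_ prompt: String) -> AsyncThrowingStream<String, Error>
}
```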
Streaming uses the native Swift concurrency stack:
- `URLSession.shared.bytes(for:)`
- SSE line parsing via `asyncBytes.lines`
- chunk decoding from `data: ...` payloads
- `AsyncThrowingStream<String, Error>` for UI consumption

This avoids the incorrect pseudocode pattern of `URLSession.shared.uploadStream`, which does not exist.
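For reference, a minimal sketch of this streaming path under the stack described above. The function name `streamChatCompletion` and its parameters are illustrative assumptions, not the app's actual adapter API.

```swift
import Foundation

// Hedged sketch: wraps URLSession byte streaming and SSE parsing in an
// AsyncThrowingStream for the UI to consume. Names are illustrative.
func streamChatCompletion(
    endpoint: URL,
    apiKey: String?,
    requestBody: Data
) -> AsyncThrowingStream<String, Error> {
    AsyncThrowingStream { continuation in
        let task = Task {
            do {
                var request = URLRequest(url: endpoint.appendingPathComponent("v1/chat/completions"))
                request.httpMethod = "POST"
                request.setValue("application/json", forHTTPHeaderField: "Content-Type")
                if let apiKey {
                    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
                }
                request.httpBody = requestBody

                // bytes(for:) yields an async byte sequence; .lines splits it into SSE lines.
                let (bytes, _) = try await URLSession.shared.bytes(for: request)
                for try await line in bytes.lines {
                    guard line.hasPrefix("data: ") else { continue }
                    let payload = String(line.dropFirst(6))
                    if payload == "[DONE]" { break }
                    // Decode each chunk and yield only the delta text.
                    if let data = payload.data(using: .utf8),
                       let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
                       let choices = json["choices"] as? [[String: Any]],
                       let delta = choices.first?["delta"] as? [String: Any],
                       let content = delta["content"] as? String {
                        continuation.yield(content)
                    }
                }
                continuation.finish()
            } catch {
                continuation.finish(throwing: error)
            }
        }
        continuation.onTermination = { _ in task.cancel() }
    }
}
```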
Swift Maestro is local-first by default:
- no analytics
- no crash reporting
- no cloud dependency unless the user explicitly configures one
Sensitive data handling:
- endpoint URL and model ID are stored in `~/Library/Application Support/SwiftMaestro/configs.json`
- agent histories are stored in `~/Library/Application Support/SwiftMaestro/agents.json`
- API tokens are stored in the macOS Keychain under:
  - service: `com.woodseedigi.SwiftMaestro`
  - account: `apikey.<config-id>`
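For illustration, writing a token under that service/account scheme could look roughly like the sketch below; the function name and error handling are assumptions, and the app's actual `KeychainService` may differ.

```swift
import Foundation
import Security

// Hedged sketch: stores a token as a generic password item using the
// service/account naming described above.
func storeToken(_ token: String, configID: String) throws {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.woodseedigi.SwiftMaestro",
        kSecAttrAccount as String: "apikey.\(configID)",
        kSecValueData as String: Data(token.utf8)
    ]
    // Remove any existing item first so the write is idempotent.
    SecItemDelete(query as CFDictionary)
    let status = SecItemAdd(query as CFDictionary, nil)
    guard status == errSecSuccess else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
}
```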
Requirements:
- Xcode
- Swift 6 toolchain available locally
- `xcodegen`

Generate the Xcode project:

```bash
xcodegen generate
```

Build:

```bash
xcodebuild -project SwiftMaestro.xcodeproj -scheme SwiftMaestro -destination "platform=macOS" -configuration Debug build
```

Launch:

```bash
open ~/Library/Developer/Xcode/DerivedData/SwiftMaestro-*/Build/Products/Debug/SwiftMaestro.app
```

Swift Maestro expects an OpenAI-compatible endpoint such as LM Studio.
Typical settings:
- Endpoint URL: `http://<host>:1234`
- Model ID: the exact model identifier returned by `/v1/models` (see the sketch after this list)
- Requires API Key: enabled if LM Studio authentication is enabled
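As a quick sanity check, the valid model identifiers can be read from the endpoint's `/v1/models` response. This is a hedged sketch, not part of the app; the `listLocalModels` helper is hypothetical.

```swift
import Foundation

// Hypothetical helper: fetches the OpenAI-compatible /v1/models list and
// returns the "id" of each model, which is the value to use as Model ID.
func listLocalModels(endpoint: URL) async throws -> [String] {
    let url = endpoint.appendingPathComponent("v1/models")
    let (data, _) = try await URLSession.shared.data(from: url)
    guard let json = try JSONSerialization.jsonObject(with: data) as? [String: Any],
          let models = json["data"] as? [[String: Any]] else {
        return []
    }
    return models.compactMap { $0["id"] as? String }
}

// Example: try await listLocalModels(endpoint: URL(string: "http://localhost:1234")!)
```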
Notes:
- when importing an API token manually, the saved value must contain only the plain token text
- an earlier verification issue was caused by a token file that actually contained raw RTF markup instead of just the token value
Phase 1 is intentionally narrow:
- only the local LLM provider path is active
- `Claude Code` is visible as a future provider but is not implemented yet
- settings currently operate on the first/default saved config rather than a fully explicit active-config model
- multiple saved configs can exist simultaneously; this works, but config management should be made more deterministic in a later cleanup pass
- connection test messaging is still minimal and should expose clearer server error detail
The following real issues were discovered and resolved during verification:
- multiple saved configs were pointing at different Keychain entries
- one agent was authenticated while another was not
- the imported token file was not plain text and had to be normalized
After normalizing saved configs and Keychain entries, both agents were verified successfully.
The next work should focus on stabilizing the current Phase 1 foundation before expanding the feature set.
- Config model cleanup
  - introduce an explicit active/default config concept
  - stop relying on the first config in the saved array
  - add a migration path for existing duplicated configs
- Settings and error handling
  - show the actual server error body when connection tests fail
  - make it clearer which saved config is being edited
  - improve token import/edit flows so malformed token files are easier to detect
- Chat UX hardening
  - clear stale draft text when switching agents
  - make message submission more deterministic across focus changes
  - improve display of streamed failures so raw response dumps do not dominate the conversation view
- Persistence cleanup
  - deduplicate saved configs
  - consider separating transient UI draft state from persisted agent history
  - add a lightweight reset/recovery path for broken local config state
Once the current local LLM flow is cleaned up, Phase 2 should expand the app without changing its local-first philosophy:
- git-aware workspace features
- multi-tab agent sessions
- better project/workspace context management
- more robust settings management for multiple local endpoints
After the Phase 1/2 foundation is stable:
- `Claude Code` adapter
- `Codex` adapter
- broader orchestration features
- richer usage and diagnostics UI, remaining local-only unless explicitly enabled
The recommended order for the next development session is:
- fix active config selection
- add config deduplication/migration
- improve Settings error reporting
- clean chat draft/send behavior
- expand into multi-session and git-aware features