Commit b9d38ad

scouzi1966 and claude authored
Add -w/--webui switch for llama-cpp webui integration (#15)
* Add -w switch for llama-cpp webui integration

  - Add -w/--webui CLI flag to enable the webui and open the browser
  - Add llama.cpp as a git submodule (sparse checkout for the webui only)
  - Add a /props endpoint for llama.cpp webui compatibility
  - Add webui serving with gzip decompression and CSS injection
  - Make the 'model' field optional in chat completion requests
  - Add Makefile targets: submodules, webui, build-with-webui
  - Update build-portable.sh to include webui resources
  - Update .gitignore for webui build artifacts

  The webui uses OpenAI-compatible endpoints (/v1/chat/completions, /v1/models, /health), which are already implemented. CSS injection hides the attachment button, since AFM doesn't support file uploads.

  Note: most webui settings (penalties, top-k, etc.) have no effect, as the Apple Foundation Model supports only the temperature parameter.

* Add Homebrew distribution support for the webui

  - Add Homebrew-style paths to webui discovery (/usr/local/share/afm/webui/, /opt/homebrew/share/afm/webui/)
  - Update create-distribution.sh to include the webui in the tarball
  - Update the portable install script to install the webui to the share directory

  This ensures the webui works when installed via the Homebrew tap or the portable distribution package.

* Add AFM branding and pin the llama.cpp submodule version

  - Inject JavaScript to rebrand the webui ("Apple Foundation Models" instead of "llama.cpp")
  - Change the subtitle to "Type a message to get started" (removes the upload reference)
  - Update the Makefile to document the pinned llama.cpp commit
  - Add a submodule-status target to show pinned versions
  - Remove the --recursive flag (not needed for a webui-only sparse checkout)

  The llama.cpp submodule is pinned to commit 0e4ebeb05 for reproducible builds.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
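The CSS injection described above can be sketched as follows. This is a hypothetical illustration, not the commit's actual code: the `injectCSS` name and the `.attachment-button` selector are assumptions; the commit only states that injected CSS hides the attachment button in the decompressed index.html.

```swift
import Foundation

// Hypothetical sketch of the CSS-injection step: after index.html.gz is
// decompressed, insert a <style> block before </head> that hides the
// attachment button. Function name and selector are assumptions.
func injectCSS(into html: String) -> String {
    let css = "<style>.attachment-button { display: none; }</style>"
    // If there is no </head>, return the document unchanged.
    guard let range = html.range(of: "</head>") else { return html }
    return html.replacingCharacters(in: range, with: css + "</head>")
}
```

Injecting at `</head>` keeps the override after the page's own stylesheets, so it wins without needing `!important` in most cases.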
1 parent 2d97ec3 commit b9d38ad

10 files changed: 402 additions & 29 deletions

.gitignore

Lines changed: 6 additions & 1 deletion
@@ -62,4 +62,9 @@ test.pdf
 
 # LoRA adapter files for testing
 *.fmadapter
-test_lora.fmadapter
+test_lora.fmadapter
+
+# WebUI build artifacts
+Resources/webui/
+vendor/llama.cpp/tools/server/webui/node_modules/
+vendor/llama.cpp/tools/server/public/

.gitmodules

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+[submodule "vendor/llama.cpp"]
+	path = vendor/llama.cpp
+	url = https://github.com/ggml-org/llama.cpp.git

Makefile

Lines changed: 43 additions & 11 deletions
@@ -1,7 +1,7 @@
 # AFM - Apple Foundation Models API
 # Makefile for building and distributing the portable CLI
 
-.PHONY: build clean install uninstall portable dist test help
+.PHONY: build clean install uninstall portable dist test help submodules submodule-status webui build-with-webui
 
 # Default target
 all: build
@@ -22,6 +22,34 @@ build:
 portable:
 	@./build-portable.sh
 
+# Initialize git submodules (pinned to specific commit for reproducibility)
+# NOTE: llama.cpp is pinned to a specific commit - do not use --remote flag
+submodules:
+	@echo "📦 Initializing git submodules (pinned version)..."
+	@git submodule update --init
+	@echo "✅ Submodules initialized (llama.cpp @ $$(cd vendor/llama.cpp && git rev-parse --short HEAD))"
+
+# Show pinned submodule versions
+submodule-status:
+	@echo "📌 Pinned submodule versions:"
+	@git submodule status
+
+# Build the webui from llama.cpp
+webui: submodules
+	@echo "🌐 Building webui..."
+	@if [ ! -d "vendor/llama.cpp/tools/server/webui" ]; then \
+		echo "❌ Error: webui source not found. Run 'make submodules' first."; \
+		exit 1; \
+	fi
+	@cd vendor/llama.cpp/tools/server/webui && npm install && npm run build
+	@mkdir -p Resources/webui
+	@cp vendor/llama.cpp/tools/server/public/index.html.gz Resources/webui/
+	@echo "✅ WebUI built: Resources/webui/index.html.gz"
+
+# Build with webui included
+build-with-webui: webui build
+	@echo "✅ Build with webui complete"
+
 # Clean build artifacts
 clean:
 	@echo "🧹 Cleaning build artifacts..."
@@ -73,19 +101,23 @@ help:
 	@echo "=================================="
 	@echo ""
 	@echo "Available targets:"
-	@echo "  build      - Build release binary (default, portable)"
-	@echo "  portable   - Build with enhanced portability"
-	@echo "  clean      - Clean build artifacts"
-	@echo "  install    - Install to /usr/local/bin (requires sudo)"
-	@echo "  uninstall  - Remove from /usr/local/bin"
-	@echo "  dist       - Create distribution package"
-	@echo "  test       - Test the binary and portability"
-	@echo "  debug      - Build debug version"
-	@echo "  run        - Build and run debug server"
-	@echo "  help       - Show this help"
+	@echo "  build            - Build release binary (default, portable)"
+	@echo "  portable         - Build with enhanced portability"
+	@echo "  clean            - Clean build artifacts"
+	@echo "  install          - Install to /usr/local/bin (requires sudo)"
+	@echo "  uninstall        - Remove from /usr/local/bin"
+	@echo "  dist             - Create distribution package"
+	@echo "  test             - Test the binary and portability"
+	@echo "  debug            - Build debug version"
+	@echo "  run              - Build and run debug server"
+	@echo "  submodules       - Initialize git submodules"
+	@echo "  webui            - Build webui from llama.cpp (requires Node.js)"
+	@echo "  build-with-webui - Build with webui included"
+	@echo "  help             - Show this help"
 	@echo ""
 	@echo "Examples:"
 	@echo "  make build            # Build portable executable"
+	@echo "  make build-with-webui # Build with webui support"
 	@echo "  make install          # Build and install to system"
 	@echo "  make dist             # Create distribution package"
 	@echo "  make test             # Test binary works"
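The Makefile installs the built webui under Resources/, and the commit also adds Homebrew-style share paths to webui discovery. A minimal sketch of that probe order follows; `findWebUI` is a hypothetical name, and only the three candidate paths come from the commit itself.

```swift
import Foundation

// Hypothetical sketch of webui discovery: probe candidate locations in
// order and return the first that exists. The bundled Resources/ path and
// the two Homebrew-style share paths are named in the commit message; the
// function name and exact probe order are assumptions.
func findWebUI(fileExists: (String) -> Bool = FileManager.default.fileExists(atPath:)) -> String? {
    let candidates = [
        "Resources/webui/index.html.gz",
        "/usr/local/share/afm/webui/index.html.gz",
        "/opt/homebrew/share/afm/webui/index.html.gz",
    ]
    return candidates.first(where: fileExists)
}
```

Passing the existence check as a closure keeps the lookup testable without touching the real filesystem.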

Sources/MacLocalAPI/Controllers/ChatCompletionsController.swift

Lines changed: 3 additions & 3 deletions
@@ -62,7 +62,7 @@ struct ChatCompletionsController: RouteCollection {
         let completionTokens = estimateTokens(for: content)
 
         let response = ChatCompletionResponse(
-            model: chatRequest.model,
+            model: chatRequest.model ?? "foundation",
             content: content,
             promptTokens: promptTokens,
             completionTokens: completionTokens
@@ -155,7 +155,7 @@ struct ChatCompletionsController: RouteCollection {
             try await streamContentSmoothly(
                 content: content,
                 streamId: streamId,
-                model: chatRequest.model,
+                model: chatRequest.model ?? "foundation",
                 encoder: encoder,
                 writer: writer,
                 isFirst: &isFirst,
@@ -176,7 +176,7 @@ struct ChatCompletionsController: RouteCollection {
         // Send final chunk with metrics
         let finalChunk = ChatCompletionStreamResponse(
             id: streamId,
-            model: chatRequest.model,
+            model: chatRequest.model ?? "foundation",
             content: "",
             isFinished: true,
             usage: usage

Sources/MacLocalAPI/Models/OpenAIRequest.swift

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@ import Vapor
 import Foundation
 
 struct ChatCompletionRequest: Content {
-    let model: String
+    let model: String?
    let messages: [Message]
     let temperature: Double?
     let maxTokens: Int?
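The two Swift diffs work together: the request type makes `model` optional, and the controller falls back to "foundation" wherever the value is used. A minimal, self-contained sketch of that behavior, assuming plain Codable in place of Vapor's Content protocol and a stand-in type name:

```swift
import Foundation

// Stand-in for ChatCompletionRequest (the real type conforms to Vapor's
// Content protocol; plain Codable keeps this sketch self-contained, and
// messages are simplified to dictionaries instead of the Message type).
struct ChatRequestSketch: Codable {
    let model: String?
    let messages: [[String: String]]
}

// A request body that omits "model" now decodes instead of failing.
let body = #"{"messages": [{"role": "user", "content": "hi"}]}"#
let request = try! JSONDecoder().decode(ChatRequestSketch.self, from: Data(body.utf8))

// Same fallback the controller applies at all three call sites.
let modelName = request.model ?? "foundation"
```

Since AFM always serves the same on-device model, defaulting the name server-side is simpler than rejecting clients (like the llama.cpp webui) that omit the field.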

0 commit comments