This feature adds opt-in memory so the browser can remember prompts/responses across P2P apps and improve continuity.
Short note on local LLMs in PeerSky: see docs https://github.com/p2plabsxyz/peersky-browser/blob/feat/local-llm/docs/LLM.md
Summary
- Persist all LLM prompts/responses to a single `llm.json`.
- Reusable History UI component for P2P apps (Editor, AI Chat).
- Memory toggle in Settings (opt-in). Clearing P2P data wipes `llm.json`.
Implementation
`llm.json` storage
- Location: app data dir (same base as `tabs.json`/`lastOpened.json`).
- Append-only, bounded by max entries (e.g., 2,000) and max size (e.g., 20 MB).
Schema
- Store per-entry with timestamp, appId, sessionId, role, content, model, tokens (if available).
- We can use TOON? https://github.com/toon-format/toon
Example:

```json
{
  "version": 1,
  "entries": [
    {
      "ts": "2025-11-18T00:00:00.000Z",
      "appId": "p2p-editor",
      "sessionId": "f4c2…",
      "role": "user",
      "content": "Create a page with 🕸️…",
      "model": "qwen2.5-coder:3b",
      "meta": { "tokens": { "input": 42, "output": 0 } }
    },
    {
      "ts": "2025-11-18T00:00:02.000Z",
      "appId": "p2p-editor",
      "sessionId": "f4c2…",
      "role": "assistant",
      "content": "<html>…</html>",
      "model": "qwen2.5-coder:3b",
      "meta": { "tokens": { "input": 0, "output": 310 } }
    }
  ]
}
```

Integrations
Unified preload bridge (src/pages/unified-preload.js):
- Provide a small `window.llmMemoryAPI`: `add(entry)`, `list({ appId, limit })`, `clear()`, respecting the settings toggle.
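A minimal sketch of that API surface and its settings gating. In the real preload it would be exposed via `contextBridge.exposeInMainWorld` and backed by `llm.json` over IPC; here a plain factory with an in-memory store stands in so the gating logic is visible. Everything beyond the three method names and the `memoryEnabled` flag is an assumption.

```javascript
// Hypothetical factory for the proposed window.llmMemoryAPI.
// `settings` carries settings.llm.memoryEnabled; `store` stands in for llm.json.
function createLLMMemoryAPI(settings, store = []) {
  const enabled = () => settings.llm && settings.llm.memoryEnabled === true;
  return {
    add(entry) {
      if (!enabled()) return false; // gate all writes on the opt-in toggle
      store.push({ ts: new Date().toISOString(), ...entry });
      return true;
    },
    list({ appId, limit = 50 } = {}) {
      if (!enabled()) return [];    // gate reads as well
      const filtered = appId ? store.filter((e) => e.appId === appId) : store;
      return filtered.slice(-limit);
    },
    clear() {
      store.length = 0;
    },
  };
}
```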
P2P Editor (src/pages/p2p/editor/ai-generator.js):
- On each generation step (HTML/JS/CSS), append both the prompt and the resulting artifact.
- Use a shared `saveLLMHistory({ appId, role, content, model, sessionId })`.
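Only the `saveLLMHistory` signature comes from this issue; the bodies below are an illustrative sketch of how a generation step might record both sides under one `sessionId`, with `memoryAPI` standing in for whatever store backs the helper.

```javascript
// Hypothetical shared helper: one call per history entry.
function saveLLMHistory(memoryAPI, { appId, role, content, model, sessionId }) {
  return memoryAPI.add({ appId, role, content, model, sessionId });
}

// Per generation step (HTML/JS/CSS): record the prompt, then the artifact.
function recordGenerationStep(memoryAPI, { sessionId, model, prompt, artifact }) {
  const base = { appId: 'p2p-editor', model, sessionId };
  saveLLMHistory(memoryAPI, { ...base, role: 'user', content: prompt });
  saveLLMHistory(memoryAPI, { ...base, role: 'assistant', content: artifact });
}
```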
AI Chat (src/pages/p2p/ai-chat/index.html):
- On submit: save the user message.
- While streaming: buffer the assistant content and save it once on completion.
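That flow could be sketched as below, with the stream modeled as an async iterable of text chunks and `memoryAPI` standing in for the memory store; the function name and shapes are assumptions.

```javascript
// Sketch: save the user message up front, buffer streamed chunks, and
// persist exactly one assistant entry after the stream completes.
async function handleChatTurn(memoryAPI, { appId, sessionId, model, userText, stream }) {
  memoryAPI.add({ appId, sessionId, model, role: 'user', content: userText });
  let buffered = '';
  for await (const chunk of stream) {
    buffered += chunk; // the UI would render each chunk incrementally here
  }
  memoryAPI.add({ appId, sessionId, model, role: 'assistant', content: buffered });
  return buffered;
}
```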
Reusable History component
- New web component: `src/pages/p2p/components/llm-history.js`
- Simple list with filters: by `appId`, search in `content`, limit N.
- Emits `onSelect(entry)` for restore/edit.
- Reuse in the Editor and AI Chat apps.
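The filtering the component needs is separable from rendering; a sketch of that pure part (the function name and case-insensitive matching are assumptions, and the web component itself would render the result and dispatch `onSelect(entry)`):

```javascript
// Filter entries by appId, a case-insensitive search over content,
// and a limit of the newest N entries (assuming append order).
function filterHistory(entries, { appId, search, limit = 100 } = {}) {
  let out = entries;
  if (appId) out = out.filter((e) => e.appId === appId);
  if (search) {
    const q = search.toLowerCase();
    out = out.filter((e) => (e.content || '').toLowerCase().includes(q));
  }
  return out.slice(-limit);
}
```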
Settings (opt-in toggle)
- Enable LLM Memory (store prompts & responses locally).
- `src/settings-manager.js`:
  - Persist `settings.llm.memoryEnabled: boolean` (default `false`).
  - Gate all writes/reads on this flag.
Clear history
- Update `resetP2PData` in `src/settings-manager.js` to also remove `llm.json` (and recreate it empty on next use).
Docs
- Update `LLM.md` with the new `window.llmMemoryAPI`, with examples.
Privacy
- Memory is local-only, never shared unless user publishes it.
Limitations
- Large files and binary outputs are not stored; only text fields.
cc @RangerMauve