
feat: LLM memory window.llmMemory #97

@akhileshthite

Description

This feature adds opt-in memory so the browser can remember prompts/responses across P2P apps and improve continuity.

For a short note on local LLMs in PeerSky, see the docs: https://github.com/p2plabsxyz/peersky-browser/blob/feat/local-llm/docs/LLM.md

Summary

  • Persist all LLM prompts/responses to a single llm.json.
  • Reusable History UI component for P2P apps (Editor, AI Chat).
  • Memory toggle in Settings (opt-in). Clearing P2P data wipes llm.json.

Implementation

llm.json storage

  • Location: app data dir (same base as tabs.json/lastOpened.json).
  • Append-only, bounded by max entries (e.g., 2,000) and max size (e.g., 20MB).
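The bounding above could be a pure pruning step applied before each write. A minimal sketch, assuming the suggested limits; the constant and function names are illustrative:

```javascript
// Illustrative limits mirroring the suggested bounds (not final values).
const MAX_ENTRIES = 2000;
const MAX_BYTES = 20 * 1024 * 1024; // 20MB

// Drop the oldest entries until both the count and size bounds are satisfied.
function pruneEntries(entries, maxEntries = MAX_ENTRIES, maxBytes = MAX_BYTES) {
  let pruned = entries.slice(-maxEntries); // keep the newest N
  while (pruned.length > 1 && Buffer.byteLength(JSON.stringify(pruned)) > maxBytes) {
    pruned = pruned.slice(1); // evict the oldest entry first
  }
  return pruned;
}
```

Running this on every append keeps llm.json append-only in spirit while guaranteeing it never grows unbounded.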

Schema

{
  "version": 1,
  "entries": [
    {
      "ts": "2025-11-18T00:00:00.000Z",
      "appId": "p2p-editor",
      "sessionId": "f4c2…",
      "role": "user",
      "content": "Create a page with 🕸️…",
      "model": "qwen2.5-coder:3b",
      "meta": { "tokens": { "input": 42, "output": 0 } }
    },
    {
      "ts": "2025-11-18T00:00:02.000Z",
      "appId": "p2p-editor",
      "sessionId": "f4c2…",
      "role": "assistant",
      "content": "<html>…</html>",
      "model": "qwen2.5-coder:3b",
      "meta": { "tokens": { "input": 0, "output": 310 } }
    }
  ]
}

Integrations

Unified preload bridge (src/pages/unified-preload.js):

  • Provide a small window.llmMemory API:
    add(entry), list({ appId, limit }), and clear(), all gated by the settings toggle.
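A minimal in-memory sketch of that surface. In PeerSky the real bridge would be exposed via contextBridge in unified-preload.js and persist to llm.json asynchronously over IPC; here the store and toggle are plain objects and the calls are synchronous, purely for illustration:

```javascript
// Stand-in for the window.llmMemory bridge (illustrative only).
// `settings.memoryEnabled` mirrors settings.llm.memoryEnabled.
function createLlmMemory(settings) {
  const entries = [];
  return {
    add(entry) {
      if (!settings.memoryEnabled) return false; // respect the opt-in toggle
      entries.push({ ts: new Date().toISOString(), ...entry });
      return true;
    },
    list({ appId, limit = 100 } = {}) {
      if (!settings.memoryEnabled) return [];
      const matching = appId ? entries.filter((e) => e.appId === appId) : entries;
      return matching.slice(-limit); // most recent N entries
    },
    clear() {
      entries.length = 0;
    },
  };
}
```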

P2P Editor (src/pages/p2p/editor/ai-generator.js):

  • On each generation step (HTML/JS/CSS), append both prompt and resulting artifact.
  • Use a shared saveLLMHistory({ appId, role, content, model, sessionId }).
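One generation step might record both sides like the sketch below. The stub saveLLMHistory stands in for the shared helper (the real one would write to llm.json through the bridge); the recordGenerationStep name is an assumption:

```javascript
// Stand-in for the shared helper described above (real one persists to llm.json).
const history = [];
async function saveLLMHistory({ appId, role, content, model, sessionId }) {
  history.push({ ts: new Date().toISOString(), appId, role, content, model, sessionId });
}

// Sketch of one generation step: append the prompt, then the resulting artifact.
async function recordGenerationStep({ prompt, artifact, model, sessionId }) {
  await saveLLMHistory({ appId: 'p2p-editor', role: 'user', content: prompt, model, sessionId });
  await saveLLMHistory({ appId: 'p2p-editor', role: 'assistant', content: artifact, model, sessionId });
}
```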

AI Chat (src/pages/p2p/ai-chat/index.html):

  • On submit: save user message.
  • While streaming: buffer to assistant content and save on completion.
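The streaming path could buffer chunks and save exactly once on completion, so partial replies never hit llm.json. A minimal sketch; the factory and callback names are assumptions:

```javascript
// Buffer a streamed assistant reply; persist the full text once on completion.
function createStreamRecorder(save) {
  let buffer = '';
  return {
    onChunk(text) {
      buffer += text; // accumulate streamed tokens
    },
    onDone(meta) {
      save({ role: 'assistant', content: buffer, ...meta }); // single save per reply
      buffer = '';
    },
  };
}
```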

Reusable History component

  • New web component: src/pages/p2p/components/llm-history.js
    • Simple list with filters: by appId, search in content, limit N.
    • Emits onSelect(entry) for restore/edit.
  • Reuse in Editor and AI Chat apps.
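The filtering the component applies (appId match, substring search in content, last-N limit) can be a pure function shared by both apps. A sketch under those assumptions:

```javascript
// Pure filter the <llm-history> component might apply before rendering.
function filterHistory(entries, { appId, search, limit = 50 } = {}) {
  return entries
    .filter((e) => !appId || e.appId === appId)       // filter by app
    .filter((e) => !search || e.content.includes(search)) // search in content
    .slice(-limit);                                    // most recent N
}
```

Keeping this logic out of the web component makes it easy to unit-test and reuse.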

Settings (opt-in toggle)

  • Enable LLM Memory (store prompts & responses locally).
  • src/settings-manager.js:
    • Persist settings.llm.memoryEnabled: boolean (default false).
    • Gate all writes/reads based on this flag.
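The default and the gate might look like the sketch below; the settings shape follows the flag described above, but the accessor name is an assumption about settings-manager.js:

```javascript
// Opt-in flag: memory is off unless the user enables it in Settings.
const defaultSettings = { llm: { memoryEnabled: false } };

// Single gate used before every llm.json read or write.
function isMemoryEnabled(settings) {
  return Boolean(settings?.llm?.memoryEnabled);
}
```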

Clear history

  • Update resetP2PData in src/settings-manager.js to also remove llm.json (and recreate empty on next use).

Docs

  • Update LLM.md to document the new window.llmMemory API, with examples.

Privacy

  • Memory is local-only and never shared unless the user explicitly publishes it.

Limitations

  • Large files and binary outputs are not stored; only text fields are persisted.

cc @RangerMauve

Labels

  • UI/UX (related to design)
  • electron (related to electron.js)
  • enhancement (new feature or request)
  • llm (related to AI)
  • priority: high (for important issues that affect many users or major functionality of the project)
