Releases: TaewoooPark/NODEPROMPT
NODEPROMPT 26.04.24
Update Catalog
26.04.24 update
Closing the loop
NodePrompt has always described itself as a co-decomposition loop: AI proposes a concept graph, the user reshapes it spatially, AI resynthesizes. The first two arcs of that loop have been visible from the start — the extraction passes and the spatial edit surface. But the third arc, the step that turns the edited graph back into a prompt, has been a black box. The user pressed Generate, the answer streamed in, and the translation from "I deleted three nodes and doubled the weight on this one" to "this is the sentence the model eventually wrote" was invisible.
This release opens that box. Every fragment of the synthesized prompt now carries a pointer back to the nodes and edges that produced it, and every sentence of the streamed answer can be clicked to see which concepts fed it. The loop is closed: graph edits, synthesized prompt, and generated response are now three aligned surfaces, and the user can hop between them.
What this is for
The richest cases are the ones where the final answer is dense and the user needs to ask "why did it say this?":
- Audit a paragraph. Click any sentence in the streamed response. The scene fades out every node that is not mentioned in that sentence, and the nodes that are mentioned are pinned as the active focus. You can now see, at a glance, which branches of your concept graph that paragraph was leaning on.
- Audit a synthesized fragment. Open the View Synthesized Prompt panel. Every concept line, every cross-branch relationship, and every excluded perspective is individually clickable. Clicking a line shows, in the scene, the exact node or edge it came from — including the edges you drew by hand.
- Audit a deletion. Nodes you have deleted still appear at the bottom of the synthesized prompt as excluded perspectives (they are part of the prompt because the model is told what not to reiterate). Clicking those entries reveals, visually, which concepts you took out.
- Compare two framings. Click one sentence of the answer, read which nodes were invoked, click a second sentence, and the scene transitions — the old highlight fades, the new one rises. The cross-fade itself is the comparison.
How it is used
Both the response view and the synthesized view use the same click-to-pin model. Click a fragment — sentence in the response panel, or segment in the synthesized panel — and it is pinned: the scene highlights stay up, the cursor is free to leave, you can scroll or edit elsewhere without losing the selection. Click the same fragment again, or click the empty background of the panel, to release. Clicking a different fragment transitions to it.
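A minimal sketch of that toggle logic, with hypothetical names (the shipped store wiring is described under Technical notes below):

```ts
// Hypothetical sketch of the click-to-pin toggle described above.
type PinState = { pinnedId: string | null };

function clickFragment(state: PinState, fragmentId: string | null): PinState {
  if (fragmentId === null) return { pinnedId: null };           // background click releases
  if (state.pinnedId === fragmentId) return { pinnedId: null }; // same fragment releases
  return { pinnedId: fragmentId };                              // different fragment transitions
}
```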
In the synthesized view:
- Hierarchy lines carry a solid underline. The thickness of the underline is scaled by the node's weight, so a heavier concept prints a visibly heavier rule in the prompt. Changing a weight in the edit panel is now visible twice — once on the graph, once in the synthesized text.
- Cross-branch relationships carry a dashed underline, in the Novak sense: these are the non-tree edges the user drew or the validate pass surfaced, and they are the places insight tends to come from.
- Excluded perspectives are struck through at half opacity, a standing visual record of what the user decided not to carry into the answer.
- Instructions carry a short left rule to mark them as system-side directives rather than content.
In the response view, the highlighting is simpler — every sentence is a click target, and sentences that mention no graph concepts are inert (clicking them does nothing rather than clearing your current selection, which means you can keep a pin active while scanning).
Highlight transitions dip through zero
Earlier the scene highlight swapped instantly when the focus changed — the old connected set vanished, the new one appeared. That snap was barely tolerable for click-once editing, but with this release the user is expected to click frequently, moving the focus between sentences and segments to compare framings. A snap at every click would be visually noisy.
The highlight state machine now transitions by dipping through zero. When the focus changes from A to B, the fade target drops to zero first; the old highlight dissolves back into the uniform scene; then, once the dip threshold is crossed, the new connected set is committed and the fade rises back to one. The whole transition takes roughly 420 ms and costs nothing at the consumer level — every scene component already consumed the single fadeProgress value, so the dip happens automatically in the sphere view, the radial view, the interior view, and the edge renderer with no local changes.
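A minimal sketch of the dip, assuming the single per-frame fadeProgress value mentioned above (every other name is illustrative):

```ts
// Hypothetical sketch of a dip-through-zero transition: fade to 0, swap the
// focused set at the dip, then fade back to 1. Only `fadeProgress` is named
// in the release notes; everything else is an assumption.
interface DipState {
  fadeProgress: number;    // 0..1, consumed by every scene component
  current: string | null;  // committed focus
  pending: string | null;  // next target, held until the dip is crossed
}

const DIP_THRESHOLD = 0.05;
const SPEED = 1 / 0.21;    // ~210 ms down + ~210 ms up ≈ 420 ms total

function stepDip(s: DipState, dt: number): DipState {
  if (s.pending !== null) {
    // Fade out toward zero while a new target is pending.
    const fade = Math.max(0, s.fadeProgress - SPEED * dt);
    if (fade <= DIP_THRESHOLD) {
      // Commit the new connected set at the bottom of the dip.
      return { fadeProgress: fade, current: s.pending, pending: null };
    }
    return { ...s, fadeProgress: fade };
  }
  // No pending target: rise back to one.
  return { ...s, fadeProgress: Math.min(1, s.fadeProgress + SPEED * dt) };
}
```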
Cross-fade per node (where old and new highlights coexist for a frame) was the other option. It was rejected because Lombardi aesthetics are about clear separations, not overlapping states; a dip feels closer in spirit to the rest of the UI.
What this does not change
The text of the synthesized prompt is byte-for-byte identical to what it used to be. The new segment renderer flattens to the same string before it is sent to the provider, which means the feature adds no tokens, no extra API cost, and no change in response quality. This is purely a presentation layer on top of the existing graph-to-prompt serialization.
Keyboard shortcuts, provider selection, extraction sliders, the six transcendental types, the gesture control, and every 3D interaction continue to work exactly as before. If the user never opens the synthesized view and never clicks an answer sentence, nothing about their workflow has changed.
Technical notes
- `synthesizePrompt(originalPrompt, nodes, edges)` now delegates to a new `synthesizePromptSegments` that emits a `SynthesisSegment[]` (sketched after this list). Each segment carries `{ text, kind, provenance: { nodeIds, edgeIds, weight, deletedMark } }`. The legacy string-returning function is a `flattenSegments` wrapper, so existing callers — the generate path, the demo loader, anything that needed the plain string — are untouched.
- Newlines live between segments, never inside them. This keeps hover and click targets visually clean: a segment is always a single printable line, so the click zone matches the visual rectangle.
- Provenance is written into a new store field `hoveredProvenance: { nodeIds, edgeIds, kind: 'text' | 'scene' }`. Despite the name, both hover (in earlier iterations) and click (in the shipped form) write through this channel. The `kind` flag is reserved for a future scene-to-text direction.
- `highlightState.getHighlightState()` now composes its focus from `hoveredProvenance` when present, falling back to `selectedNodeId`. Its cache key was widened from a single focus id to a composite signature covering provenance node and edge lists, and a pending state was added to hold the next target during a dip.
- The response tokenizer splits on `.`, `!`, `?`, `。`, and newlines, preserving break tokens so the `whiteSpace: pre-wrap` layout is undisturbed. Label matching inside a sentence prefers longer labels first and deduplicates ids, so short labels that are substrings of long ones do not double-count.
- No new dependencies. No changes to any provider adapter. No backend changes. No public API changes. The synthesizer change and the store additions are source-compatible with the multimodal release shipped earlier this week.
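A sketch of the segment shape and the flattening wrapper, built from the fields listed above; the exact `kind` union and module layout are assumptions:

```ts
// Sketch only: field names come from the notes above, the kind union is assumed.
type SegmentKind = 'concept' | 'relationship' | 'excluded' | 'instruction';

interface SynthesisSegment {
  text: string;
  kind: SegmentKind;
  provenance: {
    nodeIds: string[];
    edgeIds: string[];
    weight?: number;
    deletedMark?: boolean;
  };
}

// Legacy string path: newlines live between segments, never inside them,
// so joining on '\n' reproduces the old serialization byte for byte.
function flattenSegments(segments: SynthesisSegment[]): string {
  return segments.map((s) => s.text).join('\n');
}
```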
Notes
- Clicking a sentence with no recognized concept is intentionally a no-op: transitional prose ("In summary,", "On the other hand,") should not clear a pin the user is reading against. Clicking the empty box background, however, does clear — that is the explicit exit.
- The pin state is local to the panel and resets automatically on three events: starting a new generation, toggling between response and synthesized view, and re-extracting the graph (which invalidates segment indices). None of these require a manual clear.
- Label matching is substring-based, which is permissive — it will match "creativity" against a label "creative" if both are present. This is the right trade-off for audit use: over-highlighting a related concept is more useful than missing it. A stricter position-aware matcher can replace this helper later without touching the scene or the store (a sketch of the current matcher follows these notes).
- Excluded nodes currently pin only the node; a future iteration can surface a faint ghost at the node's last position on the sphere, a literal visualisation of "what you took out."
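A sketch of one plausible implementation of that matcher, longest label first with matched spans blanked so ids are not double-counted; all names are hypothetical:

```ts
// Hypothetical sketch of the permissive, substring-based label matcher.
interface LabeledNode { id: string; label: string; }

function matchNodeIds(sentence: string, nodes: LabeledNode[]): string[] {
  let text = sentence.toLowerCase();
  const hits = new Set<string>();
  // Longest labels first, blanking each matched span, so a short label that is
  // a substring of a longer one does not claim the same characters twice.
  const byLength = [...nodes].sort((a, b) => b.label.length - a.label.length);
  for (const { id, label } of byLength) {
    const needle = label.toLowerCase();
    const at = needle ? text.indexOf(needle) : -1;
    if (at >= 0) {
      hits.add(id);
      text = text.slice(0, at) + ' '.repeat(needle.length) + text.slice(at + needle.length);
    }
  }
  return [...hits];
}
```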
NODEPROMPT 26.04.21
Update Catalog
26.04.21 update
The prompt is no longer text-only
NodePrompt's starting premise has always been that thinking is non-linear and language is linear — the sphere exists to close that gap. But until this release, the entrance to that sphere was still a single textarea. You had to take whatever you were thinking about — a whiteboard photograph, a figure from a paper, a UI mockup, an architecture diagram — and flatten it into prose before the system could see it. Half the structure was already gone before the first extraction pass ran.
This release removes that bottleneck. Images and PDFs can now be attached directly to a prompt. The extraction pipeline reads them alongside the text, and every node the graph produces can be grounded in what the attachment actually shows, not in a verbal approximation of it.
What this is for
The richest cases are ones where the shape of the thing is the thing:
- A research paper PDF — upload the whole document, type "decompose the argument," and the scaffold / fill / validate passes surface premises, method, claims, and counter-considerations as verum nodes, with res nodes tracking the formal structure. You no longer have to pre-summarize.
- A whiteboard or notebook photograph — a messy picture of arrows and boxes becomes a clean concept graph. The visual relations become edges; the handwritten labels become node labels; the spatial clustering becomes hierarchy.
- A UI mockup or design screenshot — drop a Figma export or a screenshot, ask for the conceptual decomposition, and the six transcendental registers sort the design surface into ens (what is on screen), unum (the situation and audience), bonum (the tone it is trying to strike), and so on.
- A technical diagram — architecture diagrams, flowcharts, and data-model sketches can be attached as is and read as structure, not re-described as prose.
- A chart or plot — attach a figure without its surrounding paper, ask what it shows, and the graph surfaces the quantities, their relationships, and the implied claims.
The text field still exists, and in most of these cases it is still useful — attachments supply the content, text supplies the angle you want it read from. An attached paper read with the prompt "what are the methodological commitments here?" decomposes very differently from the same paper read with "what would this imply for practice?" The attachment is the what; the prompt is the how.
How it is used
The prompt panel now has a dropzone below the textarea. Files can be added three ways — drag-and-drop onto the panel, click the dropzone to open a file picker, or paste. Attached files show as chips with filename and (for images) a thumbnail; each chip has an × to remove it. Submission proceeds as before — text alone, attachments alone, or both together are all valid. A submit with attachments but empty text uses a default instruction that the attached files be decomposed directly.
Size limits are 5 MB per image and 10 MB per PDF. Accepted image types are JPEG, PNG, WebP, and GIF; PDFs must be application/pdf. Files that exceed the limit or use an unsupported type are rejected at the UI layer with a specific error message, before any network call.
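A sketch of the UI-layer check implied by those limits (function and constant names are hypothetical):

```ts
// Hypothetical sketch of the pre-network validation described above.
const IMAGE_TYPES = ['image/jpeg', 'image/png', 'image/webp', 'image/gif'];
const MAX_IMAGE_BYTES = 5 * 1024 * 1024;  // 5 MB per image
const MAX_PDF_BYTES = 10 * 1024 * 1024;   // 10 MB per PDF

// Returns an error message to show in the UI, or null when the file is acceptable.
function validateAttachment(file: File): string | null {
  if (IMAGE_TYPES.includes(file.type)) {
    return file.size <= MAX_IMAGE_BYTES ? null : `${file.name} exceeds the 5 MB image limit.`;
  }
  if (file.type === 'application/pdf') {
    return file.size <= MAX_PDF_BYTES ? null : `${file.name} exceeds the 10 MB PDF limit.`;
  }
  return `${file.name}: unsupported file type (${file.type || 'unknown'}).`;
}
```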
Provider capability matrix
Not every provider accepts every modality. The matrix is enforced at both the UI (unsupported file types cannot be attached) and the provider adapter (a safety check throws UnsupportedAttachmentError if something slips through):
| Provider | Image | PDF |
|---|---|---|
| Anthropic Claude | yes | yes |
| Google Gemini | yes | yes |
| OpenAI GPT | yes | no |
| xAI Grok | yes | no |
| DeepSeek | no | no |
| Alibaba Qwen | no | no |
If the active provider does not support a modality, the dropzone shows a muted placeholder naming the provider, and the file input filters its accept attribute accordingly. Switching providers from the toolbar rewires the dropzone immediately. When only PDFs are supported, an attached PDF travels through the Anthropic document block or the Gemini inline_data part; when only images are supported, they travel through OpenAI's image_url block or the equivalent on each OpenAI-compatible provider.
DeepSeek and Qwen currently expose only text-capable default models through NodePrompt, so they refuse attachments entirely. This is not a permanent limitation — when a vision-capable Qwen or DeepSeek model is wired in, the capability flag flips and the existing adapter code handles it without further changes.
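A sketch of the adapter-side safety check; `UnsupportedAttachmentError` and the `supports: { image, pdf }` flags appear in this release's notes, the rest is illustrative:

```ts
// Sketch of the second enforcement point behind the UI filter.
interface ProviderCapabilities { supports: { image: boolean; pdf: boolean } }

class UnsupportedAttachmentError extends Error {}

function assertSupported(provider: ProviderCapabilities, kinds: Array<'image' | 'pdf'>): void {
  for (const kind of kinds) {
    const ok = kind === 'image' ? provider.supports.image : provider.supports.pdf;
    if (!ok) {
      throw new UnsupportedAttachmentError(`Active provider does not accept ${kind} attachments.`);
    }
  }
}
```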
Extraction reads attachments natively
All three phases of the extraction pipeline — scaffold, fill, validate — accept the same attachment array. The model sees the text and the files together in a single multimodal message. Phase 1 (scaffold) proposes themes that can reference the attachment. Phase 2 (fill) names and describes the nodes against what the attachment actually shows. Phase 3 (validate) is the one where figure-to-text grounding matters most — cross-branch edges often surface because the model sees a relation drawn in the image that no single text-only branch of the scaffold would have named.
Descriptions generated by the Auto button in the edit panel can also be grounded in the original attachment, since the attachment is passed back to the simple-call path alongside the node label.
Technical notes
- `Attachment` is a tagged union (image | pdf) carrying `mimeType` and `dataBase64`. Files are read via `FileReader.readAsDataURL` and the `data:` prefix is stripped before the base64 payload is handed to a provider (see the sketch after this list).
- Each provider's user-message builder is responsible for translating the common `Attachment` shape into the provider's native block format: Anthropic `image` / `document` blocks, Gemini `inline_data` parts, OpenAI `image_url` blocks. PDFs through the Chat Completions path are explicitly unsupported — OpenAI's file-upload flow is a separate API and is not wired in.
- No backend changes are required. Attachments travel over the same Vite dev proxy as text prompts.
- Image previews use `URL.createObjectURL` and are revoked when the chip is removed, so no blob references leak across submissions.
- Keys, provider selection, extraction sliders, and every existing keyboard shortcut are untouched.
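A sketch of the tagged union and two of the block translations, using the publicly documented Anthropic and OpenAI block shapes; how NodePrompt's adapters wire this exactly is an assumption:

```ts
// Sketch: the common shape plus two provider translations described above.
type Attachment =
  | { kind: 'image'; mimeType: string; dataBase64: string }
  | { kind: 'pdf'; mimeType: 'application/pdf'; dataBase64: string };

// Anthropic: images and PDFs both travel as base64 source blocks.
function toAnthropicBlock(a: Attachment) {
  const source = { type: 'base64', media_type: a.mimeType, data: a.dataBase64 };
  return a.kind === 'image' ? { type: 'image', source } : { type: 'document', source };
}

// OpenAI-compatible Chat Completions: images only, re-wrapped as a data URL.
function toOpenAIBlock(a: Attachment) {
  if (a.kind !== 'image') throw new Error('PDFs are not wired through Chat Completions.');
  return { type: 'image_url', image_url: { url: `data:${a.mimeType};base64,${a.dataBase64}` } };
}
```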
Notes
- The existing "Auto" description button, the streaming response path, and the graph edit layer all continue to work in text-only mode exactly as before. Nothing about this release changes behavior for a user who never attaches a file.
- If you keep multiple provider keys, check the capability matrix above before assuming a file will be accepted — the UI will tell you, but knowing the matrix up front avoids the switch.
- The capability flags live in `src/services/llm/catalog.ts` under each provider's `supports: { image, pdf }`. Adding support when a new vision-capable model is released is a one-line change plus a provider adapter update for the block format.
NODEPROMPT 26.04.20
Update Catalog
26.04.20 update
Interior mode becomes a first-class view
Interior mode — the view you get by zooming the camera past the sphere's surface — existed, but it was effectively read-only. Nodes looked almost the same size regardless of weight, labels barely rendered, and clicking a node did not behave the way it does from the outside. This release brings interior mode into parity with spherical mode for every interaction that matters, and adds a path from inside straight to the 2D layout.
Legibility fixes
The hyperbolic fish-eye scaling is kept, but its influence was too strong — a weight-0.1 node and a weight-1.0 node ended up nearly the same pixel size once the position-based distortion piled on top. The scaling is now tempered (0.55 + 0.45 * hyperbolic), so weight remains the dominant factor and the five-fold weight range is visible again.
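As a sketch, with weightScale standing in for the node's weight-derived base size (a hypothetical name) and hyperbolic for the 0-to-1 fish-eye factor:

```ts
// Weight stays dominant; the fish-eye term now only modulates between 55% and 100%.
function interiorNodeScale(weightScale: number, hyperbolic: number): number {
  return weightScale * (0.55 + 0.45 * hyperbolic);
}
```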
Labels used to be limited to the ten nearest nodes within a hard distance threshold, which left most of the interior wordless. That LOD cap has been removed. Every visible node renders a live label, positioned above the node's actual rendered scale and kept in sync each frame — the same approach sphere mode uses.
Interaction parity with sphere mode
From inside the sphere, you can now:
- Hover a node to see the cursor become a pointer and the node scale up 1.3×.
- Click a node to select it; click it again to deselect — the same toggle as sphere mode, not the old one-way assignment.
- Focus a selected node and watch its connected neighbors brighten while unconnected nodes fade (the `highlightState` path that already powered sphere mode is now applied inside too).
- Right-click is unchanged: it still flies the camera toward the clicked node.
Double-click from inside takes you to 2D
Previously the Sphere ↔ Radial double-click worked only from the outside; interior mode ignored it. Now a double-click from inside the sphere triggers a direct morphToRadial — the node positions GSAP-interpolate from wherever they are (on the sphere surface you were viewing from the inside) to the 2D radial layout, the camera retreats from its interior position out to the radial viewing distance, and the wireframe fades out, all on the same 1.2 s power3.inOut timeline as the sphere-origin morph.
The one rendering subtlety: during the transition itself, the scene switches from InteriorView to the sphere-mode instanced renderer, because the fish-eye scaling would distort nodes mid-flight as they cross the sphere boundary. The positions themselves are continuous — you see one smooth motion from inside-view to plane, not a cut.
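A rough sketch of that shared timeline using GSAP's standard API; the target shapes and the exact tween wiring in NodePrompt are assumptions:

```ts
import gsap from 'gsap';

// Sketch: interpolate the flat xyz position array toward the radial layout and
// retreat the camera, both on one 1.2 s power3.inOut timeline.
function morphToRadial(
  nodePositions: number[],              // flat xyz array feeding the renderer
  radialTargets: number[],              // matching flat xyz array for the 2D layout
  camera: { position: { z: number } },
  radialDistance: number,
) {
  const tl = gsap.timeline({ defaults: { duration: 1.2, ease: 'power3.inOut' } });
  tl.to(nodePositions, { endArray: radialTargets }, 0); // endArray tween (GSAP core)
  tl.to(camera.position, { z: radialDistance }, 0);     // interior to radial viewing distance
  return tl;
}
```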
Technical notes
- `useInteriorTransition`'s hysteresis is already guarded by `isTransitioning`, so the automatic camera-distance-based sphere↔interior swap does not fight the morph. After the morph completes, mode is `radial` and the auto-transition is inert in that mode.
- No new public API. `triggerMorph` in `SceneInner` now routes `interior` through `morphToRadial` alongside `sphere`, and `Scene.tsx` renders `SphereView` whenever `isTransitioning` is true regardless of the current mode.
- No data changes. This is a rendering and interaction pass only; the graph model and extraction pipeline are untouched.
NODEPROMPT 26.04.15
NODEPROMPT Update Catalog
26.04.15 update
The six node types are now the six transcendentals
NodePrompt's six node types used to be pragmatic labels — concept / nuance / mood / philosophy / abstraction / context — picked to cover what a prompt contained. They worked, but they were a flat list: six buckets sitting side by side, with no reason why these six and not some other five or seven. This release replaces them with a set that has a reason: the six transcendentia Thomas Aquinas lays out in De Veritate q.1 a.1 — ens, res, unum, aliquid, verum, bonum.
The visual language stays exactly the same. The six patterns (solid, crosshatch, vertical, horizontal, diagonal, dots) are untouched; only what they mean has shifted. What was a taxonomy is now a metaphysics.
Why the transcendentals
Aquinas is not cataloguing kinds of being. He is asking: what can be said of any being as being, before we divide it into categories? His answer is that every being, simply in so far as it is, can be read through six convertible registers:
| Latin — UI | Meaning | The question it asks |
|---|---|---|
| ens — Being | id quod est — what is posited as being | What does this prompt posit as existing? |
| res — Essence | quod habet quidditatem — what has a whatness | What is it, as a formal structure? |
| unum — Unity | ens indivisum — being as undivided in itself | What holds it together as one? |
| aliquid — Difference | aliud-quid — other-than-other | What distinguishes it from what it is not? |
| verum — Truth | ens ut cognoscibile — being as knowable to intellect | How is it true to a knower? |
| bonum — Value | ens ut appetibile — being as desirable to will | How is it desirable to a will? |
These six are not six kinds of being but six aspects of the same being — convertibilia cum ente, convertible with being itself. To call something ens and to call it unum is to name the same thing from two different angles: once as existing, once as held together. The six angles are exhaustive in the scholastic sense — if a prompt has any content at all, it can be read through every one of them.
What this changes for a prompt
The old types asked, roughly, what kind of thing is this node? Concept, nuance, mood. The new types ask through which register is this node being read? The same sentence — "the model should be cautious about extrapolation" — can legitimately surface as:
- an ens node (the referent: the model, extrapolation),
- a res node (the structure: caution as a policy constraint),
- a verum node (the epistemic claim: extrapolation is unreliable),
- or a bonum node (the affective posture: carefulness as a value).
None of those readings is wrong. They are six lenses, and the graph becomes richer when a prompt is read through more than one of them. The extraction prompts have been rewritten so the model is asked, explicitly, not to collapse everything into ens — the old "concept" bucket was a strong attractor, and the new prompts name that as an anti-pattern.
First-draft mapping from the old types
For continuity, the closest first-draft correspondence between the retired labels and the transcendentals is:
| Old (retired) | New | Why this mapping |
|---|---|---|
| concept | ens | Both name the referent — what the prompt posits as existing. |
| abstraction | res | Formal structure, mechanism, quidditas — the "what it is" of a thing. |
| context | unum | The unifying frame that holds the situation together as one. |
| nuance | aliquid | Difference, subtext — aliud-quid, the "other-than" that distinguishes. |
| philosophy | verum | Worldviews, epistemic commitments — the register of being as knowable. |
| mood | bonum | Affective charge, values — being as desirable to the will. |
This is a first-draft mapping, not an identity. The point of the new scheme is precisely that a single sentence can pass through several registers at once, which the old one-type-per-node framing discouraged.
Pattern continuity
No visual assets were replaced. The canvas patterns drawn for each type — solid black for ens, crosshatch for res, vertical stripes for unum, horizontal stripes for aliquid, diagonals for verum, dots for bonum — keep their shapes; only their keys were renamed. An existing Lombardi-style graph looks identical after the upgrade.
The pattern-to-meaning alignment was chosen to be semantically honest rather than arbitrary:
- ens gets solid black — the densest, most foundational fill, matching id quod est.
- res gets crosshatch — formal structure of quidditas, two axes of definition intersecting.
- unum gets vertical stripes — ens indivisum, the thing held as one column.
- aliquid gets horizontal stripes — discrete lines marking difference and division.
- verum gets diagonals — rays of truth, the classical image for intellectual light.
- bonum gets scattered dots — the lightest, most dispersed pattern, ens ut appetibile.
Help overlay
The ? overlay has been extended with a full transcendentia section below the existing two-column reference. It renders the Latin name, the Korean / English UI label, the meaning, the question each type asks, and the first-draft mapping from the retired labels — the same table shown above, as live UI, so a user does not need to open a README to learn the six registers.
Extraction and response synthesis
The scaffold / fill / validate passes and the four-pass hierarchical extraction have all had their dimension definitions rewritten. The model now sees, for each of the six types: the Latin name and its scholastic definition, the question the register asks, a good example and a bad example, and an explicit instruction that ens is not a default fallback. Few-shot examples and anti-patterns were updated in lock-step so the model is not silently trained toward the old label set.
The response-synthesis path was updated too. Edited graphs are now serialized with the transcendental type tag alongside label / weight / abstraction level, and the generation system prompt teaches the answering model how to honor each register: verum nodes shape the argumentative premise, bonum nodes shape tone and affect, aliquid nodes surface what must be qualified, unum nodes define scope, res nodes drive definitional content, and ens nodes are the referents the answer is about. The register is instruction for the answering model, not content for the reader — the final answer never surfaces raw type tags.
No data migration
NodePrompt has never persisted graphs across sessions — every prompt is extracted fresh — so there is no stored data to migrate. A user upgrading their build opens NodePrompt and immediately sees the new registers in the type selector, the help overlay, and the editing panels. No keys to re-enter, no storage to clear.
Notes
- The six pattern textures are unchanged; only the keys that index them were renamed. Existing screenshots and documentation images remain visually accurate.
- Schemas (`NODE_TYPES` enum, Zod validators, tool-use JSON Schemas) have been updated across all six LLM adapters introduced in 26.04.14 (Anthropic, OpenAI, Gemini, xAI, DeepSeek, Qwen). Structured output enforcement works the same way; only the enum values changed.
- The change is intentionally not backwards-compatible at the type level. The old labels are gone from the code, not aliased. If you were reading `type === 'concept'` in a local fork, rewrite it as `type === 'ens'` (the renamed keys and the first-draft mapping are sketched below).
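A sketch of the renamed keys and the first-draft mapping, for forks; the exact export shapes in the codebase are assumptions:

```ts
import { z } from 'zod';

// Sketch: the six transcendental keys and a Zod validator over them.
export const NODE_TYPES = ['ens', 'res', 'unum', 'aliquid', 'verum', 'bonum'] as const;
export const nodeTypeSchema = z.enum(NODE_TYPES);
export type NodeType = z.infer<typeof nodeTypeSchema>;

// First-draft mapping from the retired labels, for forks still reading the old strings.
export const LEGACY_TYPE_MAP: Record<string, NodeType> = {
  concept: 'ens',
  abstraction: 'res',
  context: 'unum',
  nuance: 'aliquid',
  philosophy: 'verum',
  mood: 'bonum',
};
```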
NODEPROMPT 26.04.14
NODEPROMPT Update Catalog
26.04.14 update
Multi-provider LLM support
NodePrompt now runs on six LLM back ends instead of one. You can pick the active provider from the toolbar and every feature — scaffold, fill, validate, streaming synthesis, and auto-generated node descriptions — routes through the chosen provider without any code path changes.
Supported providers and default models (snapshot as of 26.04.14):
| Provider | Fast role | Flagship role |
|---|---|---|
| Anthropic | Claude Haiku 4.5 | Claude Sonnet 4.6 |
| OpenAI | GPT-5.4 Mini | GPT-5.4 |
| Gemini 2.5 Flash | Gemini 3.1 Pro | |
| xAI | Grok 4.1 Fast | Grok 4.1 Fast Reasoning |
| DeepSeek | DeepSeek V3.2 Chat | DeepSeek Reasoner |
| Alibaba | Qwen3.5 Flash | Qwen3 Max |
Fast models handle structured extraction (scaffold / fill / validate); flagship models handle the long-form synthesis stream. This split is automatic — no manual model picking required.
Unified provider abstraction
A new `src/services/llm/` layer defines a single interface (`checkConnection`, `callStructured`, `callSimple`, `stream`) that every provider implements. `claude.ts` keeps all orchestration logic (four-pass hierarchical extraction, retry ladders, budget trimming, reference integrity) and only delegates the network layer to the active provider, so adding more providers in the future requires nothing but a new adapter file.
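A sketch of that interface; the four method names come from the paragraph above, the signatures are illustrative:

```ts
// Sketch only: parameter and return shapes are assumptions.
interface LLMProvider {
  checkConnection(): Promise<boolean>;
  callStructured<T>(prompt: string, schema: object): Promise<T>;
  callSimple(prompt: string): Promise<string>;
  stream(prompt: string, onDelta: (text: string) => void): Promise<void>;
}
```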
- Anthropic adapter keeps the native `tool_choice` path so existing structured output works without translation.
- OpenAI-compatible factory covers OpenAI, Grok, DeepSeek, and Qwen from a single file; each is just a different base URL and auth header. Structured output is enforced via `response_format: json_schema` and streaming uses the standard SSE delta format.
- Gemini adapter translates the tool input schema into Gemini's `responseSchema` dialect (uppercased `type`, `nullable: true` instead of union types, no `additionalProperties`) and parses `streamGenerateContent` SSE for the synthesis panel.
Key storage and migration
- Per-provider keys are stored in a single `nodeprompt_api_keys` object in `localStorage` alongside the active-provider selector.
- The previous `nodeprompt_api_key` entry is migrated once on first load into the `anthropic` slot, so existing users keep working with zero action required (a sketch of this migration follows the list).
- Environment variables (`VITE_ANTHROPIC_API_KEY`, `VITE_OPENAI_API_KEY`, `VITE_GEMINI_API_KEY`, `VITE_XAI_API_KEY`, `VITE_DEEPSEEK_API_KEY`, `VITE_QWEN_API_KEY`) are honoured as fall-backs for development.
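A sketch of that one-time migration; the two localStorage keys are named above, the object shape and the cleanup step are assumptions:

```ts
// Hypothetical sketch of the first-load key migration.
function migrateLegacyKey(): void {
  const legacy = localStorage.getItem('nodeprompt_api_key');
  if (!legacy) return;
  const keys = JSON.parse(localStorage.getItem('nodeprompt_api_keys') ?? '{}');
  if (!keys.anthropic) keys.anthropic = legacy;   // move the old key into the anthropic slot
  localStorage.setItem('nodeprompt_api_keys', JSON.stringify(keys));
  localStorage.removeItem('nodeprompt_api_key');  // assumed: clear the legacy entry so this runs once
}
```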
Toolbar UI
- New inline provider dropdown with six minimal monotone SVG logos rendered in `currentColor` so they blend into the existing Lombardi black-and-white aesthetic. Each menu row shows the logo, the short provider name, and a filled / empty marker indicating whether a key is stored for that provider.
- The connection indicator and key-input panel now reflect the active provider. The panel title, placeholder, and key hint update to the selected provider (e.g. `sk-ant-…`, `AIza…`, `xai-…`).
Model access notes
Some flagship models need extra steps beyond a standard API key. When the user selects one of the affected providers from the dropdown, a small bilingual (Korean / English) note pops up next to the selector. Providers with no caveats show nothing:
- OpenAI: `gpt-5.4` requires Verified Organization status on the OpenAI dashboard. Unverified keys can still reach `gpt-5.4-mini`.
- Google: `gemini-3.1-pro` requires a paid-tier key with billing enabled. Free AI Studio keys are limited to `gemini-2.5-flash`.
- Alibaba: `qwen3-max` requires per-model activation in the DashScope console before it can be called. `qwen3.5-flash` is enabled by default.
Vite dev proxy
The dev server now proxies all six provider endpoints so the browser can reach them without CORS issues:
/api/anthropic → api.anthropic.com
/api/openai → api.openai.com
/api/gemini → generativelanguage.googleapis.com
/api/xai → api.x.ai
/api/deepseek → api.deepseek.com
/api/qwen → dashscope-intl.aliyuncs.com
Production deployments that expose the browser directly should still front these with a server-side proxy, since API keys live in the browser.
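A sketch of the corresponding Vite configuration, using the proxy paths listed above; the rewrite rules are assumptions:

```ts
// vite.config.ts sketch for the six dev proxies.
import { defineConfig } from 'vite';

const proxy = (target: string, prefix: string) => ({
  target,
  changeOrigin: true,
  rewrite: (path: string) => path.replace(new RegExp(`^${prefix}`), ''),
});

export default defineConfig({
  server: {
    proxy: {
      '/api/anthropic': proxy('https://api.anthropic.com', '/api/anthropic'),
      '/api/openai': proxy('https://api.openai.com', '/api/openai'),
      '/api/gemini': proxy('https://generativelanguage.googleapis.com', '/api/gemini'),
      '/api/xai': proxy('https://api.x.ai', '/api/xai'),
      '/api/deepseek': proxy('https://api.deepseek.com', '/api/deepseek'),
      '/api/qwen': proxy('https://dashscope-intl.aliyuncs.com', '/api/qwen'),
    },
  },
});
```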
Notes
- All structured-output schemas are shared across providers: the existing `input_schema` JSON Schemas used by Anthropic tool calls are reused verbatim for OpenAI / Grok / DeepSeek / Qwen, and normalized on-the-fly for Gemini.
- No breaking changes for existing Claude users. The first load on an upgraded build finds the old key, moves it into the `anthropic` slot, and keeps the provider on Anthropic by default.