
feat: PoC — Open Responses structured viewer + A2UI component renderer#1358

Open
sridhar-panigrahi wants to merge 1 commit into foss42:main from sridhar-panigrahi:poc-open-responses-genui

Conversation

@sridhar-panigrahi
Contributor

This is the working proof-of-concept for my GSoC 2026 idea doc (PR #1321).

I've been exploring the response rendering pipeline for a while — the ResponseBodyView enum, how ResponseBodySuccess switches between views, how SSEDisplay handles streaming — and this PoC wires in two new view modes without touching any existing behaviour.


What's in here

packages/genai/lib/models/open_responses.dart
Sealed classes for all Open Responses output item types: MessageOutputItem, FunctionCallOutputItem, FunctionCallResultItem, ReasoningOutputItem. The detection logic is a single static method on OpenResponsesResult — it checks for object == "response" and a non-empty output[] array. Nothing fancy, just makes the type system do the work downstream.
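The detection check is small enough to sketch. Here is the same logic in illustrative Python (the PR implements it as a static method on OpenResponsesResult in Dart; the function name here is hypothetical):

```python
import json

def looks_like_open_responses(body: str) -> bool:
    """Illustrative port of the detection described above: a body is
    treated as Open Responses when it is a JSON object carrying
    object == "response" and a non-empty output[] array."""
    try:
        data = json.loads(body)
    except ValueError:
        return False  # not JSON at all -> not Open Responses
    return (
        isinstance(data, dict)
        and data.get("object") == "response"
        and isinstance(data.get("output"), list)
        and len(data["output"]) > 0
    )
```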

lib/widgets/open_responses_viewer.dart
Renders a parsed OpenResponsesResult as a scrollable list of typed cards:

  • Messages → role-labelled chat bubbles (user right, assistant left)
  • Reasoning → collapsed by default, shows summary, tap to expand the full trace
  • Function calls → expandable card with function name + pretty-printed args, status chip
  • Tool results → shows output linked to call_id
  • Token usage bar at the bottom (input · output · total)
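The per-item dispatch behind that list could be sketched like this (Python for brevity; the real widget uses Dart pattern matching over the sealed classes, and the card labels below are purely illustrative):

```python
def card_for_item(item: dict) -> str:
    """Map one Open Responses output item to the kind of card the
    viewer renders for it. Labels are illustrative, not a real API."""
    kind = item.get("type")
    if kind == "message":
        # user messages align right, assistant messages align left
        return f"chat bubble ({item.get('role', 'assistant')})"
    if kind == "reasoning":
        return "reasoning card (collapsed, summary shown)"
    if kind == "function_call":
        return f"function-call card: {item.get('name')}"
    if kind == "function_call_output":
        return f"tool-result card for {item.get('call_id')}"
    return "raw JSON fallback"
```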

lib/widgets/a2ui_renderer.dart
Two things here. A2UIParser scans a response body line-by-line for A2UI v0.9 JSONL messages (createSurface, updateComponents, updateDataModel) and builds a flat component map + data model. A2UIRenderer walks that map recursively and renders standard catalog components — Text, Button, Card, Row, Column, Image, Icon, Divider, List, Tabs. Data binding via JSON Pointer paths (/user/name) resolves from the data model at render time.
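To make the parse-then-bind flow concrete, here is a minimal Python sketch of the two pieces. The outer keys (createSurface, updateComponents, updateDataModel) come from the description above; the inner field names ("components", "contents") are assumptions made for illustration, not confirmed A2UI v0.9 wire format:

```python
import json

def parse_a2ui(body: str):
    """Scan a JSONL body line by line and fold A2UI messages into a
    flat component map plus a data model. Inner field names are
    illustrative assumptions, not the confirmed v0.9 schema."""
    components, data_model = {}, {}
    for line in body.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            msg = json.loads(line)
        except ValueError:
            continue  # non-JSON lines are skipped, not fatal
        if "updateComponents" in msg:
            for comp in msg["updateComponents"].get("components", []):
                components[comp["id"]] = comp
        elif "updateDataModel" in msg:
            data_model.update(msg["updateDataModel"].get("contents", {}))
    return components, data_model

def resolve_pointer(model: dict, pointer: str):
    """Minimal JSON Pointer (RFC 6901) lookup, e.g. '/user/name'."""
    node = model
    for part in pointer.lstrip("/").split("/"):
        node = node[part.replace("~1", "/").replace("~0", "~")]
    return node
```

At render time the renderer walks the component map recursively and, wherever a component property holds a pointer path, calls the resolver against the data model.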

lib/widgets/response_body.dart
Extracts a static _resolveViewOptions() method. Before falling through to the usual media-type routing, it probes the body: Open Responses check first, A2UI check second, then existing logic unchanged. Zero impact on existing flows.
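The fall-through order is easy to sketch (Python for illustration; the actual method lives in response_body.dart and returns view option lists rather than strings, and the return values here are hypothetical labels):

```python
import json

def resolve_view(body: str) -> str:
    """Sketch of the probe order: Open Responses first, A2UI second,
    then the existing media-type routing (represented by 'default')."""
    # 1. Open Responses: JSON object with object == "response" and output[]
    try:
        data = json.loads(body)
        if (isinstance(data, dict)
                and data.get("object") == "response"
                and data.get("output")):
            return "structured"
    except ValueError:
        pass  # not single-document JSON; may still be JSONL
    # 2. A2UI: body contains a known A2UI message key
    if any(key in body for key in ('"createSurface"', '"updateComponents"')):
        return "genui"
    # 3. Fall through: existing media-type routing, unchanged
    return "default"
```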

lib/consts.dart
Two new ResponseBodyView values: structured (auto_awesome icon) and genui (widgets icon), plus their view option lists.


How to test

Open Responses: POST to https://api.openai.com/v1/responses with a valid key and any model that supports the Responses API. The response pane switches to "Structured" automatically. No need to set Content-Type manually — it reads application/json as usual.

A2UI: Paste a JSONL body with updateComponents messages directly into a request body and send to any endpoint that returns it, or use a mock server. The pane switches to "GenUI".

Both views have "Raw" as a fallback tab so you can still see the underlying JSON.


No new dependencies added. This is a PoC — the streaming path (processing SSE events into live-updating cards) and full A2UI data binding reactivity are the next steps I'd tackle during the project.

Signed-off-by: Shridhar Panigrahi <sridharpanigrahi2006@gmail.com>

Adds initial proof-of-concept for GSoC 2026 Idea 5 (Open Responses &
Generative UI). The goal is to stop showing raw JSON walls for structured
AI responses and instead render them as meaningful visual cards.

What this does:

- Introduces typed sealed classes for all Open Responses output item
  types (message, function_call, function_call_output, reasoning) so
  the response body can be pattern-matched cleanly instead of relying
  on string field access everywhere.

- Adds OpenResponsesViewer: a widget that renders each output item as a
  distinct card. Messages show as role-labelled chat bubbles, reasoning
  items show a summary by default with the full trace expandable on tap,
  function calls show the name + pretty-printed JSON args, and tool
  results are linked back to their call_id. Token usage sits at the
  bottom in a compact bar.

- Adds A2UIRenderer + A2UIParser: parses A2UI v0.9 JSONL payloads
  (createSurface / updateComponents / updateDataModel) into a flat
  component map and data model, then renders the tree recursively using
  a registry of standard components (Text, Button, Card, Row, Column,
  Image, Icon, Divider, List, Tabs). Data binding via JSON Pointer paths
  is resolved from the data model at render time.

- Extends ResponseBodyView with two new options: structured and genui.
  Detection lives in ResponseBody._resolveViewOptions() which probes
  the response body before falling through to existing media-type
  routing. Open Responses format is detected by checking object ==
  "response" and presence of output[] array. A2UI is detected by
  scanning body lines for createSurface / updateComponents keys.

The integration is minimal on purpose — no new dependencies, no changes
to the request pipeline, just new view options that activate when the
response body matches the relevant format.

To test: send a POST to any /v1/responses-compatible endpoint (e.g.
gpt-4o via api.openai.com/v1/responses) and the response pane will
switch to Structured view automatically. For A2UI, paste any JSONL body
with updateComponents messages and GenUI view activates.

Related: PR foss42#1321 (idea doc)

Signed-off-by: Shridhar Panigrahi <sridharpanigrahi2006@gmail.com>