Replies: 30 comments 1 reply
-
I am of course interested in everyone's thoughts, but @kriswest and @robmoffat, I'm especially interested in yours :)
-
Not many takers yet then! I should add that MCP-UI is not the only new protocol to emerge that is likely to disrupt the UI space in general - with consequential impact on FDC3.
AG-UI
AG-UI - which standardizes comms between chat frontends and AI agents - is largely complementary to MCP-UI. However, there appears to be a small amount of overlap for UI integration, because AG-UI also supports the interesting concept of Frontend-Defined Tools. It is possible that companies already using FDC3 might consider using AG-UI Frontend-Defined Tools to invoke FDC3 API methods in their new chat-based AI workflows. In this scenario, certain FDC3 issues could potentially become important. Two issues that spring to mind are:
However, it should be borne in mind that companies outside the financial services sector using AG-UI Frontend-Defined Tools would of course look for other (non-FDC3) solutions to standardize tool definitions. Hence it seems likely that before long those other patterns / technologies will also be on the table for potential adoption by financial services companies.
WebMCP
WebMCP allows app developers to expose their web app functionality as tools, allowing invocation by AI agents. This effectively means web apps can act as MCP servers, implementing tools on the client side instead of in the backend, and makes interaction between AI agents and web apps more intentional. Some details on WebMCP can be seen here:
As with MCP-UI and AG-UI, the prospect of WebMCP is also very likely to have implications for FDC3. Although I originally opened this FDC3 issue specifically to discuss the impact of MCP-UI, it would be instructive to consider the much wider issue of all new related / overlapping standards and technologies, and how FDC3 fits in / evolves / responds to the rise of AI workflows more broadly. Hence I'm going to update the title of the issue now :)
-
Update: OpenAI Apps SDK
Relevant to this discussion, there's now yet another solution in the AI/UI integration space in the form of OpenAI Apps SDK. Here's a post I put together outlining some of the implications of the OpenAI announcement:
The rapidly evolving AI / UI integration landscape is now really heating up
And the links below are worth following to learn more.
Videos:
Articles and docs:
-
I should be able to devote some proper headspace to this after OSFF 🙄
-
Thanks @robmoffat - let's catch up after OSFF then 👍
-
Discussing with someone right now - the idea of supplying context back to the LLM via FDC3 contexts is a good one. At the moment, MCP doesn't support async comms back. |
-
Thanks @robmoffat - very interesting. I can certainly see scenarios where an FDC3 context could be used in such a workflow. Potentially also scenarios with UI components crafted so they can be integrated both in existing FDC3-based dashboards and in new (separate) AI workflows. And even blended dashboards which also have a chat sidebar to offer an additional mechanism for controlling them.
Yes, you're absolutely right. At the moment with MCP-UI, you can make embedded UIs raise events to the (parent) chat UI hosting them, triggering a fresh prompt to the agent, which in turn allows the agent to decide what to do next. The current limitation of MCP-UI in this area is that comms are unidirectional. Hence:
This is significantly different to how FDC3 works, where an app instance remains alive and can listen for subsequent intents / contexts. Plenty to discuss here!
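To make the unidirectional flow above concrete, here's a minimal TypeScript sketch of the host side: an embedded UI posts an action message to the chat frontend hosting it, and the host can only respond by feeding a fresh prompt to the agent. The message shape and helper name here are illustrative assumptions, not the exact MCP-UI wire format.

```typescript
// Illustrative sketch only: the exact MCP-UI message schema may differ.
// An embedded UI posts an action to its host; the host cannot send a
// targeted reply back -- it can only feed a new prompt to the agent.

type UiActionMessage = {
  type: "tool"; // assumed action type for this sketch
  payload: { toolName: string; params: Record<string, unknown> };
};

// Host-side: convert a UI action into a fresh prompt for the agent.
// After this, the agent decides what to do next; the originating UI
// instance gets no direct response (comms are one-way).
function buildFollowUpPrompt(msg: UiActionMessage): string {
  return `The user triggered tool "${msg.payload.toolName}" ` +
         `with arguments ${JSON.stringify(msg.payload.params)}.`;
}
```

In a real chat frontend this function would sit behind a `window.addEventListener("message", ...)` handler listening to the iframe, which contrasts with FDC3's long-lived listeners on a live app instance.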
-
See also a bit of discussion on the unidirectional nature of the MCP-UI comms in this comment on the modelcontextprotocol-community UI RFC:
-
fyi i'm also introducing something that can reverse the model. |
-
sampling would be the closest |
-
so... you need a chrome extension? I don't know what a TabClientTransport is |
-
sry I didn't separate my statement about what i was building from the link to the competing solution to the WebMCP proposal listed above by Derek. |
-
@novavi i would list out this project separately to WebMCP, but in a similar vein given the project started as a fork of the original idea; it's mentioned in the articles
-
Some positive news from the Apps SDK Discord:
-
The uni-directional nature of the communication is interesting. I understand why they'd want to avoid creating a second channel of communication with the Agent, which is after all built and trained to respond to language... To do something else I assume you'd need a transformer architecture ready to receive other input types? Or to convert responses into language to feed back. However, subsequent responses would then naturally be new UI, rather than updating or interacting with the one previously returned? Perhaps generating new UIs that can issue interop messages to interact with a UI from a previous response is a way to handle that? I take your point about subagents and multiple UIs within a single window/AI chat @novavi. It is relevant to this use case.
-
Remind me - is this on the agenda for the meeting this afternoon? As @novavi points out, there is a huge amount going on in this space - and lots of churn - so I think the thing I am interested in is the "desktop experience": how does any of this impact the idea of a bunch of disparate apps on the user's desktop, all relaying information to that user? i.e. the core use-case of FDC3. Can we "do better" at solving the core use case of FDC3 by incorporating / integrating any of this stuff? Anyway, that's just me and what I'm thinking about wrt this project right now.
-
@robmoffat Yes, it was due to be on the agenda for the next Use Cases and Workflows meeting. @mistryvinay Is the next meeting in fact today? I couldn't see an Issue raised for it.
-
@kriswest Yes, that would certainly be one way to handle it within the constraints of the current architecture. Of course it would be far more elegant if there were support to target tool calls at previously-returned UIs. But I would note that in both scenarios you'd really need a concept of instanceIds for the UIs (like we have in FDC3 apps), and these instanceIds would need to be generated and available at the agent level in order for subsequent responses to be able to use them effectively...
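The instanceId idea above could be sketched as a small host-side registry, modeled loosely on FDC3's AppIdentifier.instanceId. Everything here (the class, field names, id format) is hypothetical illustration; neither MCP-UI nor FDC3 defines this today.

```typescript
// Hypothetical sketch: assigning FDC3-style instanceIds to UIs rendered
// into a chat window, so later tool calls could target a specific one.
// None of these names come from MCP-UI or FDC3 -- they illustrate the idea.

interface UiInstance {
  instanceId: string; // unique per rendered UI, like FDC3's AppIdentifier.instanceId
  toolName: string;   // the tool call that produced this UI
}

class UiInstanceRegistry {
  private instances = new Map<string, UiInstance>();
  private counter = 0;

  // Called when the host renders a tool result as UI; the returned id
  // would need to be surfaced to the agent so later responses can use it.
  register(toolName: string): UiInstance {
    const instance = { instanceId: `ui-${++this.counter}`, toolName };
    this.instances.set(instance.instanceId, instance);
    return instance;
  }

  // A future "targeted" tool call would resolve its destination here.
  resolve(instanceId: string): UiInstance | undefined {
    return this.instances.get(instanceId);
  }
}
```

The key point the sketch makes is the second half of the comment above: registering the instance is the easy part; getting the id back to the agent, so the model can reference it in subsequent responses, is the missing plumbing.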
-
Nice one - thanks @mistryvinay ! |
-
@robmoffat one thing that I think MCP-UI (and similar frameworks for rendering UI in an agent's response window) does is to create multiple UIs within that one window (which is something we don't yet support well). The question is, should they all be the AI? Or should they be entities in their own right (disparate apps that can communicate with each other as well as other apps)? You might want to pop apps out of the chat onto the desktop... I suppose one way to handle that would be for the AI to just go ahead and spawn them. I also like the idea of apps being able to raise queries with the AI through intents - is that as simple as forwarding a prompt (with attached FDC3 context if useful)? It's easy enough to provide instructions (in a prompt/persistent context) for handling or generating FDC3 context objects...
-
Thanks @kziemski - fair point, and worth calling out and discussing / exploring. A Chrome Extension seems like a valid way to explore a new idea in a reference implementation, but impractical for adoption within large enterprises for infosec reasons. So it would be interesting to know what options they're looking at for fallback.
-
Thanks @nileshtrivedi - great context |
-
Couldn't you just raise an intent to the AI window, allowing it to render whatever it wants (has been trained to) in response to the request, which could in turn use interop (or allow the user to initiate interop) with other apps on the desktop? I.e. does it make more sense to think of the AI as a platform hosting multiple UIs, OR is it better to consider the AI an app in its own right (but one that could respond to many intents and can render in weird and wonderful ways)? The latter fits better with FDC3 today (but would need a few more intents and contexts maybe?) - it perhaps also fits better with the unidirectional nature of MCP-UI - the UIs generated are ephemeral and not persistent entities in their own right - if you want those it could spawn them with a raised intent!
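The "AI as an app in its own right" idea above can be sketched with the FDC3 intent pattern. The "AskAI" intent name and "ai.prompt" / "ai.response" context types are invented for illustration (FDC3 defines none of them today), and the MiniAgent class is a stand-in for the parts of the DesktopAgent API used, so the sketch is self-contained.

```typescript
// Sketch of treating the AI chat window as an FDC3 app in its own right.
// "AskAI", "ai.prompt" and "ai.response" are invented names -- this only
// shows the shape such an addition to FDC3 could take.

type Context = { type: string; [key: string]: unknown };
type IntentHandler = (context: Context) => Promise<Context | void>;

// Minimal stand-in for the parts of the FDC3 DesktopAgent API used here.
class MiniAgent {
  private handlers = new Map<string, IntentHandler>();

  addIntentListener(intent: string, handler: IntentHandler): void {
    this.handlers.set(intent, handler);
  }

  async raiseIntent(intent: string, context: Context): Promise<Context | void> {
    const handler = this.handlers.get(intent);
    if (!handler) throw new Error(`No app registered for intent ${intent}`);
    return handler(context);
  }
}

// The AI window registers itself as the handler for the invented intent;
// any other desktop app can then raise "AskAI" at it like a normal app.
const agent = new MiniAgent();
agent.addIntentListener("AskAI", async (ctx) =>
  ({ type: "ai.response", text: `Answering: ${String(ctx.prompt)}` }));
```

This fits the unidirectional observation: each raised intent gets one response, and anything persistent the AI wants to create could itself be spawned with a further raised intent.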
-
P.S. I get that MCP-UI currently isolates UIs in iframes, and hence they can work as independent apps from FDC3's perspective (we don't currently need sub-agents) - but we do need some form of identity for them that way. However, I'm questioning whether that brings any advantage over considering 'the AI' to be the app. In Capital Markets use cases, I can't quite see the AI chat replacing all other UI any time soon, as it perhaps could in some other domains. So I think it's a case of figuring out how FDC3 can facilitate its participation as part of a "bunch of disparate apps on the user's desktop, all relaying information to that user", as Rob put it.
-
@kriswest I actually think both of the above are probably valid scenarios |
-
@kriswest Well this is an interesting point. Both MCP-UI and OpenAI's Apps SDK are particularly well suited to cross-vendor AI / UI integration. But I think as this space evolves further, there will be lots of demand to use similar techniques on a single-vendor basis. In those scenarios, mounting UIs in iframes becomes not just unnecessary but also likely a hindrance. I would concede those scenarios are not currently supported by MCP-UI (interestingly, I believe even if you use the RemoteDOM option, where the UI content is rendered inline within the main window, MCP-UI's client library actually creates a hidden iframe as a sibling of the UI content containing some JavaScript). But other (non-MCP-UI) AI approaches such as AG-UI Frontend-Defined Tools can be used for single-vendor scenarios to avoid the iframe, and if there were to be any crossover / integration with FDC3 for these scenarios, FDC3 sub-agents would then likely be quite useful.
Yes, I agree completely! And I do struggle when I hear people say they think all their UIs will be replaced with a chat interface :) I see AI more as augmenting UIs. This means conversational interfaces in some scenarios. But in many other scenarios it might mean something like an r.h.s. sidebar containing a chat-based interface as an additive change to a dashboard - allowing additional ways of interacting with individual apps, with groups of apps, and with the dashboard as a whole. This is where the AI / FDC3 crossover becomes interesting. Are the apps extended to support both FDC3 and (future-state) WebMCP? And/or are there scenarios where the response to a prompt needs to be translated to an FDC3 broadcast or intent?
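The "translate a prompt response into an FDC3 broadcast" idea can be sketched as a small mapping function in a dashboard's chat sidebar. The tool name "broadcast_instrument" is invented for illustration; the fdc3.instrument context type and its id.ticker field do follow real FDC3 conventions.

```typescript
// Sketch: translating an agent's tool call into an FDC3 context for broadcast.
// "broadcast_instrument" is an invented tool name; "fdc3.instrument" and
// id.ticker follow the real FDC3 context conventions.

type ToolCall = { name: string; arguments: Record<string, string> };
type Context = { type: string; id?: Record<string, string> };

function toolCallToContext(call: ToolCall): Context | null {
  if (call.name === "broadcast_instrument" && call.arguments.ticker) {
    return { type: "fdc3.instrument", id: { ticker: call.arguments.ticker } };
  }
  return null; // not something we know how to map onto FDC3
}

// In a real dashboard sidebar, a non-null result would then be passed to
// fdc3.broadcast(context) (or raiseIntent) on the user's current channel.
```

This is the additive-sidebar pattern: the chat interface becomes just another FDC3 participant steering the same apps the user already has open.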
-
@novavi I'm going to convert this issue to a discussion (assuming you don't object). We can then raise some concrete requests as issues to move things forward.
-
@kriswest Related to this discussion, I put together a small embryonic library called MCP-FDC3 yesterday, to help test out new ways in which FDC3 and MCP could potentially be used together in AI workflows. Unlike MCP-UI - which serves a fresh instance of an app in response to every user prompt - the pattern supported by MCP-FDC3 allows an MCP server tool to trigger events in existing (already-running) instances of apps in a frontend platform by automatically invoking FDC3 API methods. The library is still very early and incomplete, but please see the following post / article for details of MCP-FDC3 and a link to the repo: In the article I've also listed out many of the challenges I've uncovered when looking at how best to support AI / FDC3 integration (in particular, with fdc3.open and fdc3.raiseIntent). Happy to discuss further with anyone as and when...
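The pattern described above - an MCP server tool steering already-running app instances rather than returning fresh UI - can be sketched as follows. This is not the actual MCP-FDC3 code: the bridge interface, tool name, and argument shape are all assumptions for illustration; see the MCP-FDC3 repo for the real design.

```typescript
// Sketch of the MCP-FDC3 pattern: an MCP server tool that, instead of
// returning fresh UI, forwards a context to already-running app instances
// via an FDC3-like bridge. Bridge interface and tool name are assumptions.

type Context = { type: string; id?: Record<string, string> };

interface Fdc3Bridge {
  // Forwards a context to the frontend platform, where real FDC3
  // broadcast/raiseIntent calls happen against live app instances.
  broadcast(context: Context): Promise<void>;
}

// The handler an MCP server might register for a hypothetical
// "broadcast_context" tool.
function makeBroadcastTool(bridge: Fdc3Bridge) {
  return async (args: { contextType: string; ticker?: string }) => {
    const context: Context = { type: args.contextType };
    if (args.ticker) context.id = { ticker: args.ticker };
    await bridge.broadcast(context);
    // MCP tools return content for the model; here just a confirmation.
    return { content: [{ type: "text", text: `Broadcast ${context.type}` }] };
  };
}
```

The contrast with MCP-UI is in the bridge: app instances stay alive and keep their FDC3 listeners, so repeated tool calls update the same apps instead of spawning new UIs.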
-
The rise of Agentic AI workflows seems poised to disrupt the way frontend applications are delivered. In particular, many organizations are now starting to look at how MCP-UI and similar approaches can be used to allow chat-based AI workflows to break out of primarily text-based responses and raise the bar on user experience.
I believe it is now incumbent upon us to consider and discuss what this means for the FDC3 Standard. Looking at FDC3 and MCP-UI, the following observations can be made:
- fdc3.open and fdc3.raiseIntent methods
- postMessage approach
- MessageChannel implementation to the one used in FDC3 v2.2's getAgent flow
- postMessage approach was used

What's interesting to me is how the FDC3 Standard now responds to the rise of Agentic AI workflows:
It should be borne in mind that advancements in the MCP space (and the AI space more generally) are progressing at a breakneck pace, not least because of the sheer amount of money being thrown at everything AI-related, and the fact that AI solves problems across all industries. FDC3, by contrast, has of course been aimed squarely at the financial services sector because it lives within FINOS - even though theoretically it could be used to solve interop problems across all industries. This limits its mindshare and also means it evolves at a slower pace than other standards.
I'm very interested to hear thoughts about this from others in the FDC3 community. As I said, I think this important topic warrants further discussion - whether that's in the Standard Working Group, or even in a discussion group dedicated to this subject.
I have published an article as a way of organizing my initial thoughts on MCP-UI:
Deep Dive into MCP-UI: The Intersection of AI and UI
In that article, I also touch on the complementary nature of MCP-UI and FDC3. There are of course a growing number of articles on MCP-UI from elsewhere, and I've linked to many of them at the bottom of that article.