# Generative UI Agent

Generative UI Agent is a small React + Vite project that demonstrates a composable UI for generating dynamic components via an LLM-backed service. It includes a lightweight component runner, example UI components, and a service wrapper for Google Gemini-like APIs.
## Features

- Minimal React + Vite setup (TypeScript-ready)
- Dynamic component rendering via `componentRunner` and `DynamicComponent`
- Input UI component (`InputArea`) for interacting with the agent
- `geminiService` wrapper to integrate with the Google GenAI / Gemini client
- Ready-to-run dev and build scripts
## Prerequisites

- Node.js (16+ recommended)
- npm, pnpm, or yarn
- API credentials for your LLM provider (if you intend to use the `geminiService`)
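For local development, credentials can live in an env file that Vite loads automatically. A minimal sketch, assuming the service reads `GOOGLE_API_KEY` (check `geminiService.ts` for the variable name it actually uses; note that Vite only exposes variables prefixed with `VITE_` to client-side code):

```shell
# .env.local — loaded by Vite, keep out of version control
# Placeholder variable name; confirm against geminiService.ts.
GOOGLE_API_KEY=your-api-key-here
```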
## Getting started

Install dependencies:

```bash
npm install
# or
# pnpm install
# yarn install
```

Run the development server:

```bash
npm run dev
```

Build for production:

```bash
npm run build
```

Preview a production build:

```bash
npm run preview
```

The app runs on Vite (default http://localhost:5173). See `package.json` for scripts.
## Project structure

- `index.html`, `index.tsx`, `App.tsx` — app entry and layout
- `components/` — UI components
  - `DynamicComponent.tsx` — wrapper that renders dynamic component definitions
  - `InputArea.tsx` — input box / form for user prompts
- `services/` — external services and API clients
  - `geminiService.ts` — thin wrapper around the Google GenAI / Gemini SDK
- `utils/componentRunner.ts` — logic for transforming LLM responses into UI components
- `types.ts` — shared TypeScript types
- `vite.config.ts`, `tsconfig.json` — tooling config
## Notes

- The `geminiService` uses `@google/genai` (see `package.json`). Configure API keys in your environment (for example `GOOGLE_API_KEY` or similar) before using it.
- `componentRunner` contains the mapping from LLM responses to React elements. If you extend or change the component schema in your LLM prompts, update the runner accordingly.
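To illustrate the kind of mapping involved (a hedged sketch only — the real schema lives in `types.ts` and `utils/componentRunner.ts`, and the names `ComponentDef`, `ALLOWED`, and `validate` below are hypothetical), a runner typically checks each model-supplied definition against an allowlist of registered components before any rendering happens:

```typescript
// Hypothetical shape of a model-emitted component definition.
type ComponentDef = {
  type: string;                      // component name requested by the model
  props?: Record<string, unknown>;   // props passed through to the component
  children?: ComponentDef[];         // nested definitions
};

// Only components the app explicitly registers may be rendered.
const ALLOWED = new Set(["Card", "Text", "Button"]);

// Recursively validate a definition; reject the whole tree on any unknown type.
function validate(def: ComponentDef): ComponentDef | null {
  if (!ALLOWED.has(def.type)) return null;
  const children = (def.children ?? []).map(validate);
  if (children.some((c) => c === null)) return null;
  return { ...def, children: children as ComponentDef[] };
}
```

Rejecting the entire tree on one unknown node keeps the renderer from silently dropping parts of the model's output; the runner can then surface a single clear error instead.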
## Usage

- Start the dev server (`npm run dev`).
- Open the app in the browser and enter prompts in the input area. The app will call the service and attempt to render the dynamic components returned by the model.

Note: Model outputs may require sanitization and validation before being rendered in production. This project is intended for experimentation and proof-of-concept use only.
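As a sketch of what that validation step can look like (illustrative names only — the project's actual parsing lives in `componentRunner.ts`): parse the model's reply defensively, since models often wrap JSON in markdown fences, and reject anything malformed before it reaches the renderer.

```typescript
// Illustrative only — strip optional markdown fences, then parse defensively.
function extractJson(raw: string): unknown | null {
  const cleaned = raw
    .replace(/^```(?:json)?\s*/m, "")
    .replace(/```\s*$/m, "")
    .trim();
  try {
    return JSON.parse(cleaned);
  } catch {
    return null; // malformed output is rejected, never rendered
  }
}

// Minimal shape check before handing the object to the component runner.
function isComponentLike(value: unknown): value is { type: string } {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as { type?: unknown }).type === "string"
  );
}
```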
## Contributing

Contributions are welcome. Please open issues or pull requests to improve documentation, tests, or features.
## License

This project is released under the MIT License. See `LICENSE` for details (add the file if it is missing).
Built as a small starter to experiment with LLM-driven UI generation.