A sophisticated AI-powered chat application with multi-model support, subscription management, and extensive customization options.
Features • Tech Stack • Getting Started • Project Structure • API Reference • Database Schema • Deployment
Lumy Alpha is a production-ready AI chat application built with Next.js 16 (App Router) that provides users with access to multiple AI models through a unified interface. The application features a smart model routing system, subscription-based access control, real-time streaming responses, and comprehensive user customization options.
- Multi-Model AI Integration - Seamlessly switch between different AI models optimized for various tasks
- Smart Model Routing - Context-aware routing that automatically selects the best model based on input
- Subscription System - Tiered pricing with token-based usage tracking and DodoPayments integration
- Real-time Streaming - Live AI responses with tool call support
- Voice Input - Speech recognition for hands-free interaction
- File Attachments - Upload and preview files in conversations
- Message Branching - Support for conversation threading and branching
| Model ID | Technical Model | Capabilities |
|---|---|---|
| `fast` | DeepSeek V3.2 | Standard mode, quick responses |
| `thinker` | DeepSeek V3.2 | Advanced reasoning enabled |
| `pro-thinker` | Moonshot Kimi K2.5 | Complex problem solving |
| `lumy-sense-1` | Mistral Large 2512 | Vision/image analysis |
| `lumy-itor-1` | Moonshot Kimi K2.5 | Code generation & analysis |
- Web Search - AI-powered search with fast/advanced modes
- View Website - Extract and analyze content from URLs
- Get Weather - Real-time weather information
- Get Time - Current time utility
- Authentication - Secure sign-in/sign-up with Clerk
- User Profiles - Customizable profiles with onboarding flow
- AI Customization - Personalize AI tone, response length, and custom instructions
- Privacy Settings - Control data sharing and preferences
- Usage Dashboard - Track token usage and subscription status
- Message Feedback - Rate and provide feedback on AI responses
| Plan | Price | Features |
|---|---|---|
| Free | $0 | Basic access with limited tokens |
| Plus | $10/mo | Increased limits, priority support |
| Pro | $25/mo | High limits, advanced features |
| Max | $50/mo | Unlimited access, premium support |
**Frontend**

| Technology | Purpose |
|---|---|
| Next.js 16.1.4 | React framework with App Router |
| React 19.2.3 | UI library |
| TypeScript 5 | Type-safe development |
| Tailwind CSS 4 | Utility-first styling |
| Radix UI | Accessible UI primitives |
| Framer Motion | Animations and transitions |
| Zustand | Client-side state management |
| TanStack React Query | Server state management |
**Backend & Services**

| Technology | Purpose |
|---|---|
| Clerk 7.0.1 | Authentication & user management |
| Supabase | PostgreSQL database |
| Vercel AI SDK 6.0.111 | AI integration framework |
| OpenRouter | AI model provider |
| Google AI | Additional AI provider |
| DodoPayments | Subscription & payment processing |
| Upstash Redis | Rate limiting |
**Development Tools**

| Technology | Purpose |
|---|---|
| ESLint | Code linting |
| Tailwind CSS PostCSS | CSS processing |
| Tw-animate-css | Tailwind animations |
- Node.js 18+
- npm, yarn, pnpm, or bun
- Supabase account
- Clerk account
- DodoPayments account (for subscriptions)
- OpenRouter API key
1. **Clone the repository**

```bash
git clone <repository-url>
cd lumy-with-ai-sdk
```

2. **Install dependencies**

```bash
npm install
# or
yarn install
# or
pnpm install
```

3. **Configure environment variables**

Create a `.env.local` file in the root directory:

```env
# Clerk
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=
CLERK_SECRET_KEY=
NEXT_PUBLIC_CLERK_SIGN_IN_URL=/sign-in
NEXT_PUBLIC_CLERK_SIGN_UP_URL=/sign-up

# Supabase
NEXT_PUBLIC_SUPABASE_URL=
NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY=
SUPABASE_SECRET_KEY=

# AI Providers
OPENROUTER_API_KEY=
GOOGLE_AI_API_KEY=

# DodoPayments
DODO_PAYMENTS_API_KEY=
DODO_PAYMENTS_WEBHOOK_SECRET=

# Upstash Redis (Rate Limiting)
UPSTASH_REDIS_REST_URL=
UPSTASH_REDIS_REST_TOKEN=
```

4. **Set up the database**

Run the SQL migrations in your Supabase dashboard:

```bash
# Navigate to database/migrations/ and run each SQL file in order:
# 001_complete_profiles_setup.sql
# 002_chat_history_setup.sql
# 003_message_feedback_setup.sql
# 004_user_settings_setup.sql
# 005_message_token_usage_log_setup.sql
# 006_subscription_system_setup.sql
# 007_tool_usage_log.sql
# 008_file_uploads.sql
# 009_signed_url_cache.sql
```

5. **Run the development server**

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

6. **Open the application**

Navigate to http://localhost:3000 in your browser.
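A missing or empty environment variable usually surfaces as a confusing runtime error deep inside a provider SDK. A small startup check can fail fast instead. This is a sketch, not part of the codebase: `missingEnvKeys` and `REQUIRED_ENV_KEYS` are hypothetical names, and the key list simply mirrors the `.env.local` template above; trim it to whatever your deployment actually requires.

```typescript
// Hypothetical startup check for required environment variables.
// The key list mirrors the .env.local template above; adjust as needed.
const REQUIRED_ENV_KEYS = [
  "NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY",
  "CLERK_SECRET_KEY",
  "NEXT_PUBLIC_SUPABASE_URL",
  "NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY",
  "SUPABASE_SECRET_KEY",
  "OPENROUTER_API_KEY",
] as const;

// Returns the names of any keys that are unset or empty.
export function missingEnvKeys(
  env: Record<string, string | undefined>,
  required: readonly string[] = REQUIRED_ENV_KEYS,
): string[] {
  return required.filter((key) => !env[key] || env[key]!.trim() === "");
}

// Example: call this early (e.g. in instrumentation) to fail fast:
// const missing = missingEnvKeys(process.env);
// if (missing.length > 0) {
//   throw new Error(`Missing env vars: ${missing.join(", ")}`);
// }
```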
```text
lumy-with-ai-sdk/
├── ai-sdk-v6-docs/          # AI SDK v6 documentation
├── database/
│   └── migrations/          # Supabase SQL migrations
├── dev-docs/                # Development documentation
├── plans/                   # Feature plans
├── public/                  # Static assets
├── src/
│   ├── app/
│   │   ├── (auth)/          # Authentication pages
│   │   ├── (main)/          # Main app with sidebar
│   │   ├── api/             # API routes
│   │   ├── checkout/        # Payment checkout
│   │   ├── feedback/        # Feedback pages
│   │   ├── settings/        # Settings pages
│   │   └── upgrade/         # Pricing/upgrade page
│   ├── components/
│   │   ├── experimental-components/
│   │   ├── hjls-css-collection/
│   │   ├── messages/        # Message components
│   │   ├── ui/              # UI primitives
│   │   └── ...              # Other components
│   ├── hooks/               # React hooks
│   ├── lib/
│   │   ├── allowance/       # Usage allowance logic
│   │   └── ...              # Utilities
│   ├── providers/           # Context providers
│   ├── stores/              # Zustand stores
│   └── types/               # TypeScript types
├── components.json          # shadcn/ui config
├── eslint.config.mjs        # ESLint configuration
├── next.config.ts           # Next.js configuration
├── package.json             # Dependencies
├── postcss.config.mjs       # PostCSS configuration
└── tsconfig.json            # TypeScript configuration
```
**Chat & Conversations**

| Endpoint | Method | Description |
|---|---|---|
| `/api/chat` | POST | Main chat endpoint with streaming |
| `/api/conversations` | GET/POST | Conversation management |
| `/api/conversations/[id]` | GET/PUT/DELETE | Individual conversation operations |
| `/api/messages` | GET/POST | Message operations |
| `/api/upload` | POST | File upload handling |
| `/api/attachments` | GET | Attachment URL resolution |
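On the client, the `/api/chat` response arrives as a stream rather than a single JSON body. The exact wire format is defined by the AI SDK version in use, so the event shape below (newline-delimited JSON with a `text-delta` type) is an illustrative assumption, not the documented protocol; `collectText` and `StreamEvent` are hypothetical names.

```typescript
// Assumed event shape for illustration only; the real wire format
// is determined by the AI SDK version powering /api/chat.
type StreamEvent =
  | { type: "text-delta"; delta: string }
  | { type: "tool-call"; toolName: string };

// Accumulates assistant text from newline-delimited JSON events.
export function collectText(body: string): string {
  let text = "";
  for (const line of body.split("\n")) {
    if (line.trim() === "") continue;
    const event = JSON.parse(line) as StreamEvent;
    if (event.type === "text-delta") text += event.delta;
  }
  return text;
}

// Usage sketch with fetch: read res.body with a reader, decode chunks,
// and feed complete lines to a parser like the one above.
// const res = await fetch("/api/chat", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ messages: [{ role: "user", content: "Hi" }] }),
// });
```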
**Users & Settings**

| Endpoint | Method | Description |
|---|---|---|
| `/api/profile` | GET/PUT | User profile management |
| `/api/settings` | GET/PUT | User settings CRUD |
| `/api/onboarding` | POST | User onboarding completion |
**Subscriptions & Billing**

| Endpoint | Method | Description |
|---|---|---|
| `/api/subscription` | GET | Subscription status |
| `/api/checkout` | POST | Create checkout session |
| `/api/usage` | GET | Usage statistics |
| `/api/webhooks` | POST | DodoPayments webhooks |
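The `/api/webhooks` route must verify that incoming events really come from DodoPayments before mutating subscription state. DodoPayments' actual signature scheme is not described here, so the sketch below assumes the common pattern of an HMAC-SHA256 hex digest of the raw request body keyed by `DODO_PAYMENTS_WEBHOOK_SECRET`; `verifyWebhookSignature` is a hypothetical helper, and you should follow the provider's own docs for the real header and digest format.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of webhook signature verification. Assumes an HMAC-SHA256
// hex signature over the raw body; DodoPayments' real scheme may differ.
export function verifyWebhookSignature(
  rawBody: string,
  signatureHex: string,
  secret: string,
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so guard first.
  if (received.length !== expected.length) return false;
  return timingSafeEqual(received, expected);
}
```

Comparing digests with `timingSafeEqual` (rather than `===` on hex strings) avoids leaking information through comparison timing.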
**Feedback**

| Endpoint | Method | Description |
|---|---|---|
| `/api/message-feedback` | POST | Submit message feedback |
| Table | Description |
|---|---|
| `profiles` | User profiles linked to Clerk authentication |
| `conversations` | Chat sessions with titles and timestamps |
| `messages` | Individual messages with branching support |
| `user_settings` | Privacy and AI customization settings |
| `user_subscriptions` | DodoPayments subscription data |
| `periodic_allowance` | Token usage tracking (6-hour windows) |
| `message_feedback` | User feedback on AI responses |
| `message_token_usage_log` | Detailed token usage logs |
| `file_uploads` | Attachment metadata |
| `signed_url_cache` | Cached signed URLs for attachments |
- Message Branching - Messages can reference previous messages via `previous_message_id`
- Sliding Window Allowance - 6-hour rolling windows for usage limits
- Soft Deletes - `deleted_at` columns for data retention
- Timestamps - UTC timestamps with automatic updates
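The branching scheme above means a conversation is a tree of messages, and rendering one thread means walking from a leaf back to the root. The sketch below shows that walk; the row shape (`id`, `previous_message_id`, `content`) is assumed from the description above, `branchPath` is a hypothetical helper, and the message graph is assumed to be acyclic.

```typescript
// Assumed row shape, based on the branching scheme described above.
interface MessageRow {
  id: string;
  previous_message_id: string | null; // null marks the root message
  content: string;
}

// Returns the root-to-leaf path ending at leafId.
// Assumes previous_message_id links form an acyclic chain.
export function branchPath(rows: MessageRow[], leafId: string): MessageRow[] {
  const byId = new Map(rows.map((row) => [row.id, row] as const));
  const path: MessageRow[] = [];
  let current = byId.get(leafId);
  while (current) {
    path.push(current);
    current = current.previous_message_id
      ? byId.get(current.previous_message_id)
      : undefined;
  }
  return path.reverse();
}
```

Two leaves sharing the same `previous_message_id` are sibling branches: editing a message creates a new leaf without destroying the old thread.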
The application uses a smart model routing system defined in `src/lib/model-router.ts`:

```typescript
// Example model route configuration
{
  uiModelId: 'fast',
  technicalModel: 'deepseek/deepseek-chat-v3.2',
  provider: 'openrouter',
  reasoning: false
}
```

Rate limiting is configured via Upstash Redis with the following defaults:
- Chat API: 20 requests per 6 hours
- Upload API: 10 requests per hour
- General API: 100 requests per hour
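In production these limits are enforced through Upstash Redis so they hold across serverless instances. As a sketch of the windowing logic only, here is an in-memory sliding-window limiter mirroring the chat default (20 requests per 6 hours); `SlidingWindowLimiter` is a hypothetical name, not the app's actual implementation.

```typescript
// In-memory sliding-window limiter sketch. Production uses Upstash
// Redis instead; this only illustrates the windowing logic.
export const SIX_HOURS_MS = 6 * 60 * 60 * 1000;

export class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(
    private limit: number,
    private windowMs: number,
  ) {}

  // Returns true if the request is allowed, false if rate limited.
  check(key: string, nowMs: number): boolean {
    const windowStart = nowMs - this.windowMs;
    // Keep only timestamps still inside the window.
    const recent = (this.hits.get(key) ?? []).filter((t) => t > windowStart);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(nowMs);
    this.hits.set(key, recent);
    return true;
  }
}

// Usage, mirroring the chat default of 20 requests / 6 hours:
// const chatLimiter = new SlidingWindowLimiter(20, SIX_HOURS_MS);
// chatLimiter.check(userId, Date.now());
```

A sliding window avoids the burst-at-the-boundary problem of fixed windows: a request made 5h59m ago still counts against the limit until the full 6 hours elapse.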
**Vercel**

1. Push your code to a Git repository
2. Import the project in Vercel
3. Configure environment variables
4. Deploy
**Docker**

```bash
# Build the image
docker build -t lumy-alpha .

# Run the container
docker run -p 3000:3000 lumy-alpha
```

**Self-hosting**

```bash
# Build the application
npm run build

# Start the production server
npm start
```

Available scripts:

```bash
npm run dev    # Start development server
npm run build  # Build for production
npm run start  # Start production server
npm run lint   # Run ESLint
```

The project uses ESLint with Next.js configuration. Run `npm run lint` to check for issues.
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
This project is private and proprietary. All rights reserved.
For support, please contact the development team or open an issue in the repository.
Built with Next.js, AI SDK, and ❤️