Ell-ena is a sophisticated AI-powered product management system that automates task management, ticket creation, and meeting transcriptions while maintaining full work context. This document provides a comprehensive technical explanation of how Ell-ena works, its architecture, and its key components.
Ell-ena implements a modern architecture that combines Flutter for the frontend with Supabase for backend services, enhanced by AI processing pipelines:
```
┌────────────────────────────────────────────────────────────────────────┐
│                           FRONTEND (Flutter)                           │
├───────────────┬─────────────────┬────────────────────┬─────────────────┤
│  Auth Module  │  Task Manager   │  Meeting Manager   │ Chat Interface  │
└───────┬───────┴────────┬────────┴──────────┬─────────┴────────┬────────┘
        │                │                   │                  │
        ▼                ▼                   ▼                  ▼
┌────────────────────────────────────────────────────────────────────────┐
│                         Supabase Service Layer                         │
├────────────────────────────────────────────────────────────────────────┤
│                                                                        │
│  ┌─────────────┐   ┌─────────────┐   ┌──────────────┐   ┌────────────┐ │
│  │ Auth Client │   │ Data Client │   │Storage Client│   │ RPC Client │ │
│  └──────┬──────┘   └──────┬──────┘   └──────┬───────┘   └─────┬──────┘ │
│         │                 │                 │                 │        │
└─────────┼─────────────────┼─────────────────┼─────────────────┼────────┘
          │                 │                 │                 │
          ▼                 ▼                 ▼                 ▼
┌────────────────────────────────────────────────────────────────────────┐
│                           BACKEND (Supabase)                           │
├───────────────┬─────────────────┬────────────────────┬─────────────────┤
│ Authentication│  PostgreSQL DB  │   Object Storage   │ Edge Functions  │
└───────┬───────┴────────┬────────┴──────────┬─────────┴────────┬────────┘
        │                │                   │                  │
        ▼                ▼                   ▼                  ▼
┌────────────────────────────────────────────────────────────────────────┐
│                         AI Processing Pipeline                         │
├───────────────┬─────────────────┬────────────────────┬─────────────────┤
│ NLU Processor │ Vector Database │ Embedding Generator│  AI Summarizer  │
└───────────────┴─────────────────┴────────────────────┴─────────────────┘
```
One of Ell-ena's most powerful features is its automated meeting transcription and summarization system. Here's a detailed breakdown of how it works:
When a user schedules a meeting with a Google Meet URL, Ell-ena's system:
1. **Monitors for meeting activity**: The system detects when a meeting starts and ends via the Google Meet integration.
2. **Captures audio**: Using the Vexa API, Ell-ena captures the meeting audio in real time.
3. **Generates transcription**: The Vexa API processes the audio and generates a detailed text transcription of the entire meeting conversation, including speaker identification.
4. **Stores raw transcription**: The raw transcription is stored in the `meetings` table in the Supabase database, linked to the specific meeting record.
Once a meeting has a transcription, an automated pipeline processes it:
1. **Scheduled processing**: A PostgreSQL cron job (`process_unsummarized_meetings`) runs every minute to check for meetings with transcriptions but no summaries.
2. **Edge Function invocation**: For each meeting needing processing, the system calls the `summarize-transcription` Edge Function.
3. **AI processing**: The Edge Function uses the Gemini API to:
   - Analyze the meeting transcription
   - Extract key topics, decisions, and action items
   - Generate a structured summary with follow-up tasks
   - Format the output as a structured JSON object
4. **Summary storage**: The AI-generated summary is stored in the `meeting_summary_json` column of the `meetings` table.
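The AI-processing step depends on the model returning clean JSON, which LLMs do not always do (they often wrap output in a markdown fence). A minimal sketch of how the Edge Function might validate the model's output before writing it to `meeting_summary_json` — the field names below are illustrative assumptions, since this document doesn't spell out the summary schema:

```typescript
// Illustrative shape of the structured summary. The real schema is
// defined by Ell-ena's summarization prompt, not shown here.
interface MeetingSummary {
  key_topics: string[];
  decisions: string[];
  action_items: { description: string; assignee?: string }[];
  follow_up_tasks: { description: string; due_date?: string }[];
}

// Strip an optional markdown code fence, parse, and fail fast if the
// model omitted a required field.
function parseSummary(raw: string): MeetingSummary {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```$/, "")
    .trim();
  const obj = JSON.parse(cleaned);
  for (const key of ["key_topics", "decisions", "action_items", "follow_up_tasks"]) {
    if (!(key in obj)) throw new Error(`summary missing field: ${key}`);
  }
  return obj as MeetingSummary;
}
```

Rejecting malformed output here lets the cron job simply retry the meeting on its next pass rather than storing a broken summary.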
After a meeting has been summarized, another pipeline creates semantic embeddings:
1. **Scheduled embedding generation**: A PostgreSQL cron job (`process_meetings_missing_embeddings`) runs every 5 minutes to check for meetings with summaries but no embeddings.
2. **Edge Function invocation**: For each meeting needing embeddings, the system calls the `generate-embeddings` Edge Function.
3. **Vector creation**: The Edge Function uses Gemini's embedding-001 model to convert the meeting summary text into a 768-dimensional vector.
4. **Vector storage**: The embedding vector is stored in the `summary_embedding` column of the `meetings` table, using PostgreSQL's vector extension.
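When the vector is written through raw SQL or an RPC call, it has to be serialized into pgvector's text-literal form, `'[v1,v2,...]'`. A small helper sketch, assuming the Edge Function holds the embedding as a plain number array:

```typescript
// Serialize an embedding for insertion into a pgvector column.
// pgvector accepts a text literal of the form '[0.1,0.2,...]'.
const EMBEDDING_DIM = 768; // dimension of Gemini's embedding-001 output

function toPgvectorLiteral(embedding: number[]): string {
  if (embedding.length !== EMBEDDING_DIM) {
    throw new Error(`expected ${EMBEDDING_DIM} dimensions, got ${embedding.length}`);
  }
  if (embedding.some((v) => !Number.isFinite(v))) {
    throw new Error("embedding contains non-finite values");
  }
  return `[${embedding.join(",")}]`;
}
```

Depending on the client library, the raw array may also be accepted directly; the text literal is the portable fallback for hand-written SQL. The dimension check guards against a model or API change silently writing vectors the `summary_embedding` column can't index.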
When a user asks a question about past meetings:
1. **Query detection**: The AI service uses keyword analysis to detect whether the user's query is meeting-related.
2. **Query embedding**: If it is, the system generates a vector embedding for the user's query via the `get-embedding` Edge Function.
3. **Vector similarity search**: The system performs a cosine similarity search against all meeting summary embeddings to find the most relevant meetings.
4. **Context enrichment**: The relevant meeting information is included as context in the prompt to the AI, enabling it to provide accurate, contextual responses.
5. **Response generation**: The Gemini model generates a response that incorporates the relevant meeting information, creating the impression of "memory" of past discussions.
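In practice the similarity search in step 3 runs inside PostgreSQL via the vector extension, but the computation it performs is easy to make concrete. The TypeScript below is illustrative only — a client-side sketch of cosine ranking, not Ell-ena's actual query path:

```typescript
// Cosine similarity between a query embedding and a stored embedding.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank meetings by similarity to the query and keep the top k.
function topKMeetings(
  query: number[],
  meetings: { id: string; embedding: number[] }[],
  k: number,
): { id: string; score: number }[] {
  return meetings
    .map((m) => ({ id: m.id, score: cosineSimilarity(query, m.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The top-k meeting summaries are what gets pasted into the prompt as context in step 4.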
Ell-ena provides sophisticated task and ticket management capabilities:
Users can create tasks using natural language commands through the chat interface:
1. **Intent recognition**: The AI service recognizes when a user is trying to create a task or ticket.
2. **Function calling**: The AI generates a structured function call to `create_task` or `create_ticket` with appropriate parameters.
3. **Parameter extraction**: The system extracts relevant details like title, description, due date, priority, and assignee from the user's natural-language input.
4. **Task creation**: The Supabase service creates the task or ticket record in the database with the extracted parameters.
5. **Real-time updates**: The UI updates in real time to show the newly created task or ticket.
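The function-calling flow above can be sketched as a small dispatcher that routes a model-generated call to the right handler. The argument shapes below are illustrative assumptions, not Ell-ena's actual function declarations:

```typescript
// A model-generated function call: a name plus extracted arguments.
interface FunctionCall {
  name: string;
  args: Record<string, unknown>;
}

// Each handler turns extracted arguments into a row to insert.
// Field names here are assumptions for illustration only.
const handlers: Record<string, (args: Record<string, unknown>) => Record<string, unknown>> = {
  create_task: (args) => ({ table: "tasks", title: args.title, due_date: args.due_date ?? null }),
  create_ticket: (args) => ({ table: "tickets", title: args.title, priority: args.priority ?? "medium" }),
};

// Route the call; reject anything the model invents.
function dispatch(call: FunctionCall): Record<string, unknown> {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`unknown function: ${call.name}`);
  return handler(call.args);
}
```

Rejecting unknown function names is important: models occasionally hallucinate calls, and an allow-list keeps those from reaching the database.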
Ell-ena enriches tasks with contextual information:
- **Team member awareness**: The system understands team member names and can assign tasks appropriately.
- **Date interpretation**: Natural-language date references like "tomorrow" or "next week" are automatically converted to proper date formats.
- **Priority inference**: For tickets, the system infers appropriate priority levels based on the context of the request.
- **Category assignment**: Tickets are automatically categorized based on their content.
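The date-interpretation step can be sketched as a small deterministic helper. The phrase table is an illustrative subset and `now` is injected for testability; in practice the LLM itself may resolve such references during parameter extraction:

```typescript
// Convert a natural-language date phrase to an ISO date (YYYY-MM-DD).
// Returns null for phrases this tiny table doesn't cover, so the
// caller can fall back to the LLM or a full date-parsing library.
function interpretDate(phrase: string, now: Date): string | null {
  const dayMs = 24 * 60 * 60 * 1000;
  const offsets: Record<string, number> = {
    today: 0,
    tomorrow: 1,
    "next week": 7,
  };
  const offset = offsets[phrase.trim().toLowerCase()];
  if (offset === undefined) return null;
  const target = new Date(now.getTime() + offset * dayMs);
  return target.toISOString().slice(0, 10);
}
```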
Ell-ena can automatically generate tasks from meeting summaries:
1. **Action item extraction**: The AI identifies action items and follow-up tasks from meeting transcriptions.
2. **Structured data creation**: These are stored as structured data in the meeting summary JSON.
3. **One-click conversion**: Users can convert these items to formal tasks or tickets with a single click from the meeting details screen.
4. **Automatic assignment**: The system attempts to assign tasks to the appropriate team members based on the meeting context.
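The one-click conversion amounts to a pure mapping from an extracted action item to a ticket record. The field names below are assumptions for illustration, not the actual columns of Ell-ena's tickets table:

```typescript
// Hypothetical shapes: an action item as extracted into the meeting
// summary JSON, and the ticket row it becomes.
interface ActionItem {
  description: string;
  assignee?: string;
  due_date?: string;
}

interface TicketRecord {
  title: string;
  description: string;
  assignee: string | null;
  due_date: string | null;
  source_meeting_id: string;
  status: "open";
}

function actionItemToTicket(item: ActionItem, meetingId: string): TicketRecord {
  return {
    // Use the description as the title, truncated if it runs long.
    title: item.description.length > 80 ? item.description.slice(0, 77) + "..." : item.description,
    description: item.description,
    assignee: item.assignee ?? null,
    due_date: item.due_date ?? null,
    source_meeting_id: meetingId, // keeps the ticket traceable to its meeting
    status: "open",
  };
}
```

Keeping a `source_meeting_id` on the ticket is what lets the UI link back from a ticket to the discussion that produced it.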
Ell-ena implements a sophisticated multi-account login system:
- **Team creation**: Users can create new teams with unique team codes.
- **Team joining**: Users can join existing teams using team codes.
- **Role-based access**: Users are assigned roles (admin or member) that determine their permissions.
- **Multi-team support**: Users can belong to multiple teams and switch between them.
- **Data isolation**: Each team's data is completely isolated using Supabase's Row-Level Security policies.
- **Permission enforcement**: Database policies ensure users can only access data from their own teams.
- **Role-based permissions**: Certain actions (like approving tasks or deleting meetings) are restricted to admin users.
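The document doesn't specify what a team code looks like, so the following generator is purely hypothetical: a 6-character code over an alphabet that omits easily confused characters, with uniqueness still enforced by a database constraint rather than by the generator itself:

```typescript
// Hypothetical team-code format: 6 characters, no 0/O, 1/I/L.
const CODE_ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789";
const CODE_LENGTH = 6;

function generateTeamCode(): string {
  let code = "";
  for (let i = 0; i < CODE_LENGTH; i++) {
    code += CODE_ALPHABET[Math.floor(Math.random() * CODE_ALPHABET.length)];
  }
  return code;
}

// Validate a code a user types when joining a team.
function isValidTeamCode(code: string): boolean {
  return code.length === CODE_LENGTH && [...code].every((c) => CODE_ALPHABET.includes(c));
}
```

On a collision with an existing team's code (a UNIQUE violation on insert), the caller would simply generate again.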
The dashboard provides an at-a-glance view of:
- **Task summary**: Shows pending, in-progress, and completed tasks.
- **Upcoming meetings**: Displays scheduled meetings with quick-join links.
- **Recent activity**: Shows recent updates across the team.
- **Team member status**: Indicates which team members are active.
The calendar screen offers:
- **Meeting visualization**: Shows all scheduled meetings in a calendar view.
- **Task due dates**: Displays task deadlines alongside meetings.
- **Quick scheduling**: Allows users to create new meetings by selecting time slots.
- **Meeting details**: Provides quick access to meeting information and join links.
The AI-powered chat interface:
- **Natural language interaction**: Lets users interact with the system using everyday language.
- **Function detection**: Automatically detects when users want to create tasks, tickets, or meetings.
- **Context awareness**: Maintains conversation context and understands references to previous messages.
- **Meeting memory**: Can recall and reference information from past meeting transcriptions.
The meeting details screen provides:
- **Basic information**: Shows meeting title, description, date, time, and duration.
- **Join link**: Offers a direct link to join virtual meetings.
- **Transcription status**: Indicates whether transcription is pending, in progress, or complete.
- **AI summary**: Displays the AI-generated summary of the meeting.
- **Action items**: Shows extracted action items with the ability to convert them to tickets.
- **Follow-up tasks**: Lists follow-up tasks with the ability to convert them to formal tasks.
The Flutter frontend is organized into:
- **Screens**: UI components for different app sections (auth, tasks, meetings, etc.).
- **Services**: Business logic modules that interact with the backend:
  - `supabase_service.dart`: Handles all Supabase interactions
  - `ai_service.dart`: Manages AI processing and function calling
  - `meeting_formatter.dart`: Formats meeting data for display
  - `navigation_service.dart`: Manages app navigation
- **Widgets**: Reusable UI components shared across the app.
The Supabase backend consists of:
- **Database Schema**: Tables for users, teams, tasks, tickets, meetings, etc.
- **Edge Functions**: Serverless functions for AI processing:
  - `fetch-transcript`: Retrieves meeting transcriptions
  - `generate-embeddings`: Creates vector embeddings for meeting content
  - `get-embedding`: Retrieves embeddings for specific content
  - `search-meetings`: Performs semantic search across meeting transcriptions
  - `start-bot`: Initializes the AI assistant
  - `summarize-transcription`: Generates AI summaries of meeting transcriptions
- **SQL Functions**: Database functions for various operations:
  - `process_unsummarized_meetings`: Processes meetings with transcriptions but no summaries
  - `process_meetings_missing_embeddings`: Processes meetings with summaries but no embeddings
  - `search_meeting_summaries`: Performs semantic search to find relevant meeting summaries
- **Cron Jobs**: Scheduled tasks that run automatically:
  - Process unsummarized meetings (runs every minute)
  - Generate embeddings for meetings (runs every 5 minutes)
The Google Gemini API is used for:
- Natural language understanding: Processing user queries and commands
- Function calling: Detecting when to create tasks, tickets, or meetings
- Meeting summarization: Generating structured summaries from transcriptions
- Vector embeddings: Creating semantic embeddings for meeting content
- Contextual responses: Generating responses that incorporate meeting context
To illustrate how all components work together, here's a complete lifecycle of a meeting in Ell-ena:
1. **Meeting Creation**:
   - User creates a meeting via the chat or calendar interface
   - System stores meeting details in the database
   - Google Meet URL is generated and stored
2. **Meeting Occurrence**:
   - User joins the meeting via the stored URL
   - Vexa API captures the audio and generates a transcription
   - Transcription is stored in the database
3. **Automated Processing**:
   - Cron job detects the meeting has a transcription but no summary
   - `summarize-transcription` Edge Function is called
   - Gemini API analyzes the transcription and generates a structured summary
   - Summary is stored in the database
4. **Embedding Generation**:
   - Cron job detects the meeting has a summary but no embedding
   - `generate-embeddings` Edge Function is called
   - Gemini embedding-001 model creates a vector embedding
   - Embedding is stored in the database
5. **User Query**:
   - User asks "What did we decide about the marketing budget last week?"
   - System detects this is a meeting-related query
   - `get-embedding` Edge Function creates an embedding for the query
   - Vector similarity search finds the most relevant meeting summaries
   - Relevant meeting information is included in the AI prompt
   - Gemini generates a response that incorporates the meeting context
6. **Task Creation**:
   - User clicks on an action item from the meeting summary
   - System creates a new ticket with details from the action item
   - Ticket is assigned to the appropriate team member
   - Team member receives a notification about the new ticket
This end-to-end flow demonstrates how Ell-ena combines real-time transcription, AI processing, vector search, and task management to create a seamless, context-aware productivity system.
Ell-ena represents a sophisticated integration of modern technologies to create an AI-powered productivity assistant that truly understands context and helps teams work more efficiently. By combining Flutter's cross-platform UI capabilities, Supabase's powerful backend services, and Google Gemini's advanced AI capabilities, Ell-ena delivers a seamless experience that feels like working with a smart teammate rather than just another tool.
The system's ability to automatically transcribe meetings, generate summaries, extract action items, and later recall this information when needed represents a significant advancement in AI-assisted productivity tools. The context-aware task and ticket creation further enhances team efficiency by reducing manual data entry and ensuring important details aren't lost.