linkedin · email · repos · ko-fi
I build production LLM systems — voice agents, RAG pipelines, multi-agent backends — and the full-stack apps they live inside.
Recent work: real-time voice AI on Pipecat + WebRTC with multi-provider STT-LLM-TTS, LangGraph multi-agent backends with hybrid retrieval and HITL gating, HIPAA-compliant clinical AI, and a 50K-user crypto product ecosystem with smart-contract payouts.
- Languages — Python · TypeScript · Rust
- AI / Agents — LangChain · LangGraph · Pipecat · LiveKit · Pydantic-AI · Firebase Genkit (used in TS or Python depending on the project)
- Backend — FastAPI · Node.js · asyncio · SQLAlchemy
- Frontend — Next.js · React · Tailwind · Tauri
- Data — PostgreSQL · Supabase · pgvector · Redis · MongoDB
- Currently learning — voice-agent eval harnesses, real-time guardrails, MCP server patterns at scale, semantic chunking for legal/medical RAG
- Remote · open to senior IC roles + select contracts
Config-driven multi-agent voice orchestration: JSON-defined flows with runtime agent generation, intelligent handoff coordination, and cross-call memory.
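As a sketch of what a JSON-defined flow with runtime agent generation can look like (the schema, field names, and `AgentSpec` type here are hypothetical illustrations, not the project's actual config format):

```python
import json
from dataclasses import dataclass, field


@dataclass
class AgentSpec:
    """One agent in the flow: its prompt plus the agents it may hand off to."""
    name: str
    system_prompt: str
    handoff_targets: list[str] = field(default_factory=list)


def build_agents(config_json: str) -> dict[str, AgentSpec]:
    """Instantiate agent specs at runtime from a JSON flow definition."""
    flow = json.loads(config_json)
    return {
        a["name"]: AgentSpec(
            name=a["name"],
            system_prompt=a["prompt"],
            handoff_targets=a.get("handoffs", []),
        )
        for a in flow["agents"]
    }


# Hypothetical flow: a triage agent that can hand off to a booking agent.
FLOW = """
{
  "agents": [
    {"name": "triage", "prompt": "Greet and route the caller.", "handoffs": ["booking"]},
    {"name": "booking", "prompt": "Schedule the appointment."}
  ]
}
"""

agents = build_agents(FLOW)
```

Because the flow lives in data rather than code, adding or rewiring an agent is a config change, and the orchestrator can consult each spec's `handoff_targets` to gate which transfers are legal mid-call.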
Multi-modal AI media generation tool — Veo 3 video, Imagen images, MusicLM audio. Three interfaces: desktop GUI, CLI, REST API.
Professional multi-interface text-to-speech with Google Chirp 3 HD voices — 30+ voices, 28 languages. Desktop GUI, CLI, REST API.
Universal documentation-to-Markdown CLI for LLM context, with multi-strategy discovery.
README — How I work
- Async-first. Design doc → spike → benchmark → ship. I write down what I'd build before I build it.
- Evaluation before claims. Every retrieval, agent, and latency claim in my repos is backed by a numbers table.
- Demos > slides. I'd rather send you a 30-second Loom than a deck.
- Open to senior IC roles in voice AI / LLM systems / full-stack AI products. Long-running contracts (3+ months) over week-long gigs.
The source for this README lives in /Abdulrahman-Elsmmany and updates daily via a GitHub Action.








