Quality Signals
- Quick Validate: fast GitHub checks (Python compile smoke, CI-lite dependency audit, frontend script checks).
- GitGuardian: secret detection and leak prevention in repository history and pull requests.
- OpenAPI Contract: validates OpenAPI export and TypeScript codegen synchronization.
- Quality Gate Status: SonarCloud quality gate for backend and frontend.
Venom is a local AI platform for engineering automation that combines agent orchestration, tool execution, and organizational memory in one operational environment. It is designed to shorten delivery time from task analysis and planning to implementation and quality control. With a local-first approach, teams keep stronger control over data, costs, and runtime predictability.
In practice, Venom acts as a decision-and-execution layer for technical teams: it automates repetitive work, structures project knowledge, and provides a consistent control point for runtime, configuration, and model governance. This makes scaling delivery easier without proportional growth in operational overhead.
- Reduces end-to-end delivery time for technical tasks (plan + execute + verify).
- Lowers operating cost with local runtime and provider control.
- Keeps organizational knowledge through long-term memory and lessons learned.
- Improves operational control: service status, configuration, and model governance.
- Standardizes team workflows and QA expectations.
- Agent orchestration - planning and execution through specialized roles.
- Hybrid model runtime - Ollama / vLLM / cloud switching with local-first behavior.
- Memory and knowledge - persistent context, lessons learned, and knowledge reuse.
- Workflow learning - automation built from user demonstrations.
- Operations and governance - service panel, policy gate, and provider cost control.
- Extensibility - local tools and MCP import from Git repositories.
- Security and governance were hardened (Policy Gate, cost limits, fallback policies).
- Operational layer was unified (Workflow Control Plane, config panel, runtime monitoring).
- Quality and learning modules were strengthened (Academy, intent-router rollout, test-artifact policy).
- Formal closure for 152 (Ollama 0.16.x) was completed with full evidence and PASS hard gates.
- Runtime profiles/onboarding contract (`light`/`llm_off`/`full`) was implemented and stabilized in the `venom.sh` launcher (PL/EN/DE + headless mode).
- ADR-001 was accepted and the `RoutingDecision` soft e2e contract was integrated (governance + policy + observability).
- API Contract Wave-1 was completed: explicit `response_model` for `system/api-map`, memory response schemas, OpenAPI/codegen sync, and wave-based DI cleanup.
- Added optional modules platform capability: custom modules can now be registered and enabled via an environment-driven module registry.
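To make the environment-driven module registry concrete, here is a minimal sketch of the pattern. This is not Venom's actual implementation; the variable name `VENOM_MODULES` and the `ModuleRegistry` class are hypothetical, chosen only to illustrate how optional modules listed in an environment variable could be imported and enabled at startup.

```python
"""Illustrative sketch of an environment-driven module registry.
All names here (VENOM_MODULES, ModuleRegistry) are assumptions,
not Venom's real API."""
import importlib
import os


class ModuleRegistry:
    def __init__(self) -> None:
        # Maps module name -> imported module object.
        self.enabled: dict[str, object] = {}

    def load_from_env(self, var: str = "VENOM_MODULES") -> list[str]:
        """Import each comma-separated module listed in the env var."""
        raw = os.environ.get(var, "")
        names = [n.strip() for n in raw.split(",") if n.strip()]
        for name in names:
            self.enabled[name] = importlib.import_module(name)
        return names
```

For example, setting `VENOM_MODULES="json, math"` and calling `load_from_env()` would import both stdlib modules and record them in `enabled`.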
- Deployment + startup - Development/production startup flow and runtime requirements.
- Configuration panel - What can be edited from UI and safe editing rules.
- Frontend Next.js - `web-next` structure, views, and implementation standards.
- API traffic control - Global anti-spam/anti-ban model for inbound and outbound API traffic.
- System vision - Target platform direction and product assumptions.
- Backend architecture - Backend modules, responsibilities, and component flows.
- Hybrid AI engine - LOCAL/HYBRID/CLOUD routing and local-first policy.
- Workflow Control - Workflow control model, operations, and guardrails.
- System agents catalog - Agent roles, inputs/outputs, and runtime cooperation.
- The Academy - Learning, tuning, and training data operationalization.
- Optional Module Guide - How to author, register, and enable optional modules for Venom core.
- Memory layer - Vector/graph memory organization and retrieval rules.
- External integrations - GitHub/Slack and other integration setup.
- Coding-agent guidelines - Mandatory agent workflow and quality gates.
- Contributing - Contribution process and PR expectations.
- Testing policy - Test scope, validation commands, and quality requirements.
- QA Delivery Guide - Delivery checklist from validation to release readiness.
```
venom/
├── venom_core/
│   ├── api/routes/      # REST API endpoints (agents, tasks, memory, nodes)
│   ├── core/flows/      # Business flows and orchestration
│   ├── agents/          # Specialized AI agents
│   ├── execution/       # Execution layer and model routing
│   ├── perception/      # Perception (desktop_sensor, audio)
│   ├── memory/          # Long-term memory (vectors, graph, workflows)
│   └── infrastructure/  # Infrastructure (hardware, cloud, message broker)
├── web-next/            # Dashboard frontend (Next.js)
└── modules/             # Optional modules workspace (separate module repos)
```
- ArchitectAgent - breaks complex tasks into an execution plan.
- ExecutionPlan - plan model with steps and dependencies.
- ResearcherAgent - gathers and synthesizes web knowledge.
- WebSearchSkill - search and content extraction.
- MemorySkill - long-term memory (LanceDB).
- CoderAgent - generates code based on available knowledge.
- CriticAgent - verifies code quality.
- LibrarianAgent - manages files and project structure.
- ChatAgent - conversational assistant.
- GhostAgent - GUI automation (RPA).
- ApprenticeAgent - learns workflows by observation.
- HybridModelRouter (`venom_core/execution/model_router.py`) - local/cloud routing.
- Modes: LOCAL, HYBRID, CLOUD.
- Local-first: privacy and cost control first.
- Providers: Ollama/vLLM (local), Gemini, OpenAI.
- Sensitive data can be blocked from leaving local runtime.
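The routing policy above can be sketched in a few lines. This is only an illustration of the decision logic (LOCAL/HYBRID/CLOUD modes with sensitive data pinned to the local runtime), not the actual `HybridModelRouter`; the `Request` shape and `route` function are made up for the example.

```python
"""Illustrative local-first routing sketch; NOT Venom's HybridModelRouter."""
from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    sensitive: bool = False


def route(req: Request, mode: str, local_available: bool) -> str:
    """Return 'local' or 'cloud' for a request under the given mode."""
    if req.sensitive:
        # Sensitive data never leaves the local runtime, regardless of mode.
        return "local"
    if mode == "LOCAL":
        return "local"
    if mode == "CLOUD":
        return "cloud"
    # HYBRID: prefer local when a local server (Ollama/vLLM) is available.
    return "local" if local_available else "cloud"
```

Note that even in CLOUD mode a sensitive request stays local, which is the "sensitive data can be blocked from leaving local runtime" guarantee.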
- DemonstrationRecorder - records user actions (mouse, keyboard, screen).
- DemonstrationAnalyzer - behavioral analysis and pixel-to-semantic mapping.
- WorkflowStore - editable procedure repository.
- GhostAgent integration - execution of generated workflows.
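As a rough illustration of the DemonstrationRecorder-to-WorkflowStore path, the sketch below collapses raw recorded events into replayable steps. The event and step shapes here are assumptions for the example, not Venom's actual schema.

```python
"""Hypothetical sketch: turn recorded demonstration events into workflow
steps. Event tuples and step dicts are illustrative, not Venom's format."""


def events_to_steps(events: list[tuple]) -> list[dict]:
    """Collapse raw (kind, payload) events into replayable steps:
    consecutive keystrokes merge into one 'type' step, clicks map 1:1."""
    steps: list[dict] = []
    for kind, payload in events:
        if kind == "key" and steps and steps[-1]["action"] == "type":
            steps[-1]["text"] += payload  # merge into the previous type step
        elif kind == "key":
            steps.append({"action": "type", "text": payload})
        elif kind == "click":
            steps.append({"action": "click", "x": payload[0], "y": payload[1]})
    return steps
```

A workflow stored this way stays editable (the WorkflowStore role above) before GhostAgent replays it.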
- Orchestrator - core coordinator.
- IntentManager - intent classification and path selection.
- TaskDispatcher - routes tasks to agents.
- Workflow Control Plane - visual workflow control.
- LessonStore - repository of experience and corrections.
- Training Pipeline - LoRA/QLoRA fine-tuning.
- Adapter Management - model adapter hot-swapping.
- Genealogy - model evolution and metric tracking.
- Backend API (FastAPI/uvicorn) and Next.js UI.
- LLM servers: Ollama, vLLM.
- LanceDB (embedded), Redis (optional).
- Nexus and background tasks as optional processes.
Local install:

```bash
git clone https://github.com/mpieniak01/Venom.git
cd Venom
pip install -r requirements.txt
cp .env.example .env
make start
```

Docker install:

```bash
git clone https://github.com/mpieniak01/Venom.git
cd Venom
scripts/docker/venom.sh
```

After startup:
- API: http://localhost:8000
- UI: http://localhost:3000
```bash
make start       # backend + frontend (dev)
make stop        # stop services
make status      # process status
make start-prod  # production mode
```

The presentation layer runs on Next.js 15 (App Router, React 19).
- SCC (server/client components) - server components by default, interactive parts as client components.
- Shared layout (`components/layout/*`) - TopBar, Sidebar, status bar, overlays.
```bash
npm --prefix web-next install
npm --prefix web-next run dev
npm --prefix web-next run build
npm --prefix web-next run test:e2e
npm --prefix web-next run lint:locales
```

```bash
NEXT_PUBLIC_API_BASE=http://localhost:8000
NEXT_PUBLIC_WS_BASE=ws://localhost:8000/ws/events
API_PROXY_TARGET=http://localhost:8000
```

- Force tool: `/<tool>` (e.g. `/git`, `/web`).
- Force provider: `/gpt` (OpenAI) and `/gem` (Gemini).
- UI shows a `Forced` label when a prefix is detected.
- UI language is sent as `preferred_language` in `/api/v1/tasks`.
- Summary strategy (`SUMMARY_STRATEGY`): `llm_with_fallback` or `heuristic_only`.
Python 3.10+ (recommended 3.11)
- `semantic-kernel>=1.9.0` - agent orchestration.
- `ddgs>=1.0` - web search.
- `trafilatura` - web text extraction.
- `beautifulsoup4` - HTML parsing.
- `lancedb` - vector memory database.
- `fastapi` - API server.
- `zeroconf` - mDNS service discovery.
- `pynput` - user action recording.
- `google-genai` - Gemini (optional).
- `openai` / `anthropic` - LLM providers (optional).
Full list: requirements.txt
Full checklist: docs/DEPLOYMENT_NEXT.md.
```bash
make start
make stop
make status
```

```bash
make start-prod
make stop
```

| Configuration | Commands | Estimated RAM | Use case |
|---|---|---|---|
| Minimal | `make api` | ~50 MB | API tests / backend-only |
| Light with local LLM | `make api` + `make ollama-start` | ~450 MB | API + local model, no UI |
| Light with UI | `make api` + `make web` | ~550 MB | Demo and quick UI validation |
| Balanced | `make api` + `make web` + `make ollama-start` | ~950 MB | Day-to-day work without dev autoreload |
| Heaviest (dev) | `make api-dev` + `make web-dev` + `make vllm-start` | ~2.8 GB | Full development and local model testing |
Full list: .env.example
The panel at http://localhost:3000/config supports:
- service status monitoring (backend, UI, LLM, Hive, Nexus),
- start/stop/restart from UI,
- realtime metrics (PID, port, CPU, RAM, uptime),
- quick profiles: `Full Stack`, `Light`, `LLM OFF`.
- type/range validation,
- secret masking,
- `.env` backup to `config/env-history/`,
- restart hints after changes.
- editable parameter whitelist,
- service dependency validation,
- timestamped change history.
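The whitelist-plus-validation rule above can be illustrated with a short sketch. The parameter names and ranges here are invented for the example; they are not Venom's actual editable-parameter list.

```python
"""Sketch of UI-safe config editing: an editable-parameter whitelist with
type/range validation. EDITABLE entries are hypothetical examples."""

EDITABLE = {
    "LOG_LEVEL": {"type": str, "choices": {"DEBUG", "INFO", "WARNING"}},
    "MAX_WORKERS": {"type": int, "min": 1, "max": 32},
}


def validate_edit(key: str, value):
    """Reject edits to non-whitelisted keys or out-of-range values."""
    if key not in EDITABLE:
        raise KeyError(f"{key} is not editable from the UI")
    rule = EDITABLE[key]
    if not isinstance(value, rule["type"]):
        raise TypeError(f"{key} must be {rule['type'].__name__}")
    if "choices" in rule and value not in rule["choices"]:
        raise ValueError(f"{key} must be one of {sorted(rule['choices'])}")
    if "min" in rule and not rule["min"] <= value <= rule["max"]:
        raise ValueError(f"{key} must be between {rule['min']} and {rule['max']}")
    return value
```

Only values that pass all checks would be written to `.env` (after the backup described above).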
```bash
make monitor
bash scripts/diagnostics/system_snapshot.sh
```

The report (`logs/diag-YYYYMMDD-HHMMSS.txt`) includes:
- uptime and load average,
- memory usage,
- top CPU/RAM processes,
- Venom process status,
- open ports (8000, 3000, 8001, 11434).
```bash
make env-audit
make env-clean-safe
make env-clean-docker-safe
CONFIRM_DEEP_CLEAN=1 make env-clean-deep
make env-report-diff
```

Run with prebuilt images:
```bash
git clone https://github.com/mpieniak01/Venom.git
cd Venom
scripts/docker/venom.sh
```

Compose profiles:
- `compose/compose.release.yml` - end-user profile (pull prebuilt images).
- `compose/compose.minimal.yml` - developer profile (local build).
- `compose/compose.spores.yml.tmp` - Spore draft, currently inactive.
Useful commands:
```bash
scripts/docker/venom.sh
scripts/docker/run-release.sh status
scripts/docker/run-release.sh restart
scripts/docker/run-release.sh stop
scripts/docker/uninstall.sh --stack both --purge-volumes --purge-images
scripts/docker/logs.sh
```

Runtime profile (single package, selectable mode):

```bash
export VENOM_RUNTIME_PROFILE=light  # light|llm_off|full
scripts/docker/run-release.sh start
```

`llm_off` means no local LLM runtime (Ollama/vLLM), but the backend and UI can still use external LLM APIs (for example OpenAI/Gemini) after API key configuration.
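A tooling script that consumes `VENOM_RUNTIME_PROFILE` would typically validate it before startup. The helper below is a sketch of that check, using only the profile names documented here; the function itself is hypothetical.

```python
"""Sketch: validate VENOM_RUNTIME_PROFILE early instead of failing mid-startup.
The env var and its three values come from this README; the helper is illustrative."""
import os

VALID_PROFILES = ("light", "llm_off", "full")


def resolve_profile(default: str = "light") -> str:
    """Read VENOM_RUNTIME_PROFILE, normalize it, and reject typos."""
    value = os.environ.get("VENOM_RUNTIME_PROFILE", default).strip().lower()
    if value not in VALID_PROFILES:
        raise ValueError(
            f"VENOM_RUNTIME_PROFILE={value!r}; expected one of {VALID_PROFILES}"
        )
    return value
```

Failing fast on an unknown profile keeps misconfigured containers from starting in an undefined mode.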
Optional GPU mode:
```bash
export VENOM_ENABLE_GPU=auto
scripts/docker/run-release.sh restart
```

- CI: Quick Validate + OpenAPI Contract + SonarCloud.
- Security: GitGuardian + periodic dependency scans.
- `pre-commit run --all-files` runs: `block-docs-dev-staged`, `end-of-file-fixer`, `trailing-whitespace`, `check-added-large-files`, `check-yaml`, `debug-statements`, `ruff-check --fix`, `ruff-format`, `isort`.
- Extra hooks outside this command: `block-docs-dev-tracked` (stage `pre-push`) and `update-sonar-new-code-group` (stage `manual`).
- `pre-commit` can auto-fix files; rerun it until all hooks are `Passed`.
- Treat `mypy venom_core` as a full typing audit; the repository may include a historical typing backlog unrelated to your change.
- Local PR sequence:
```bash
source .venv/bin/activate || true
pre-commit run --all-files
make pr-fast
make check-new-code-coverage
```

- v1.4 features (planning, knowledge, memory, integrations).
- The Academy (LoRA/QLoRA).
- Workflow Control Plane.
- Provider Governance.
- Academy Hardening.
- Background polling for GitHub Issues.
- Dashboard panel for external integrations.
- Recursive long-document summarization.
- Search result caching.
- Plan validation and optimization.
- Better error recovery.
- GitHub webhook handling.
- MS Teams integration.
- Multi-source verification.
- Google Search API integration.
- Parallel plan step execution.
- Plan caching for similar tasks.
- GraphRAG integration.
- Code and comments: Polish or English.
- Commit messages: Conventional Commits (`feat`, `fix`, `docs`, `test`, `refactor`).
- Style: Black + Ruff + isort (via pre-commit).
- Tests: required for new functionality.
- Quality gates: SonarCloud must pass on PR.
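For contributors unsure whether a commit subject follows the Conventional Commits rule above, here is a quick sanity check. It is a sketch, not the project's actual CI hook, and covers only the five types listed in this README.

```python
"""Sketch: check a commit subject against the Conventional Commits types
listed in this README (feat, fix, docs, test, refactor)."""
import re

TYPES = ("feat", "fix", "docs", "test", "refactor")
# type, optional (scope), optional breaking-change "!", then ": description"
PATTERN = re.compile(rf"^({'|'.join(TYPES)})(\([\w./-]+\))?(!)?: .+")


def is_conventional(subject: str) -> bool:
    return bool(PATTERN.match(subject))
```

For example, `feat(api): add nodes endpoint` passes, while `update stuff` does not.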
- Development lead: mpieniak01.
- Architecture: Venom Core Team.
- Contributors: Contributors list.
- Microsoft Semantic Kernel, Microsoft AutoGen, OpenAI / Anthropic / Google AI, pytest, open-source community.
Venom - Autonomous AI agent system for next-generation automation
This project is distributed under the MIT license. See LICENSE.
Copyright (c) 2025-2026 Maciej Pieniak