Venom is an experimental, local-first AI system designed to orchestrate agents, memory and decision logic in a controlled, auditable way.


Venom v1.5 🐍


Quality Signals

  • Quick Validate: fast GitHub checks (Python compile smoke, CI-lite dependency audit, frontend script checks).
  • GitGuardian: secret detection and leak prevention in repository history and pull requests.
  • OpenAPI Contract: validates OpenAPI export and TypeScript codegen synchronization.
  • Quality Gate Status: SonarCloud quality gate for backend and frontend.

| Documentation in Polish

Venom is a local AI platform for engineering automation that combines agent orchestration, tool execution, and organizational memory in one operational environment. It is designed to shorten delivery time from task analysis and planning to implementation and quality control. With a local-first approach, teams keep stronger control over data, costs, and runtime predictability.

In practice, Venom acts as a decision-and-execution layer for technical teams: it automates repetitive work, structures project knowledge, and provides a consistent control point for runtime, configuration, and model governance. This makes scaling delivery easier without proportional growth in operational overhead.

Why it matters for business

  • Reduces end-to-end delivery time for technical tasks (plan + execute + verify).
  • Lowers operating cost with local runtime and provider control.
  • Keeps organizational knowledge through long-term memory and lessons learned.
  • Improves operational control: service status, configuration, and model governance.
  • Standardizes team workflows and QA expectations.

Key capabilities

  • 🤖 Agent orchestration - planning and execution through specialized roles.
  • 🧭 Hybrid model runtime - Ollama / vLLM / cloud switching with local-first behavior.
  • 💾 Memory and knowledge - persistent context, lessons learned, and knowledge reuse.
  • 🎓 Workflow learning - automation built from user demonstrations.
  • 🛠️ Operations and governance - service panel, policy gate, and provider cost control.
  • 🔌 Extensibility - local tools and MCP import from Git repositories.

Recent updates (2026-02)

  • Security and governance were hardened (Policy Gate, cost limits, fallback policies).
  • Operational layer was unified (Workflow Control Plane, config panel, runtime monitoring).
  • Quality and learning modules were strengthened (Academy, intent-router rollout, test-artifact policy).
  • Formal closure for 152 (Ollama 0.16.x) was completed with full evidence and PASS hard gates.
  • Runtime profiles/onboarding contract (light/llm_off/full) was implemented and stabilized in venom.sh launcher (PL/EN/DE + headless mode).
  • ADR-001 was accepted and RoutingDecision soft e2e contract was integrated (governance + policy + observability).
  • API Contract Wave-1 was completed: explicit response_model for system/api-map, memory response schemas, OpenAPI/codegen sync, and wave-based DI cleanup.
  • Added optional modules platform capability: custom modules can now be registered and enabled via environment-driven module registry.

Documentation

  • Start and operations
  • Architecture
  • Agents and capabilities
  • Quality and collaboration

UI preview

  • Knowledge Grid - memory and knowledge relation view.
  • Trace Analysis - request flow and orchestration analysis.
  • Configuration - runtime and service management.
  • AI Command Center - operations console and work history.

Architecture

Project structure

venom/
├── venom_core/
│   ├── api/routes/          # REST API endpoints (agents, tasks, memory, nodes)
│   ├── core/flows/          # Business flows and orchestration
│   ├── agents/              # Specialized AI agents
│   ├── execution/           # Execution layer and model routing
│   ├── perception/          # Perception (desktop_sensor, audio)
│   ├── memory/              # Long-term memory (vectors, graph, workflows)
│   └── infrastructure/      # Infrastructure (hardware, cloud, message broker)
├── web-next/                # Dashboard frontend (Next.js)
└── modules/                 # Optional modules workspace (separate module repos)

Main components

1) Strategic layer

  • ArchitectAgent - breaks complex tasks into an execution plan.
  • ExecutionPlan - plan model with steps and dependencies.

2) Knowledge expansion

  • ResearcherAgent - gathers and synthesizes web knowledge.
  • WebSearchSkill - search and content extraction.
  • MemorySkill - long-term memory (LanceDB).

3) Execution layer

  • CoderAgent - generates code based on available knowledge.
  • CriticAgent - verifies code quality.
  • LibrarianAgent - manages files and project structure.
  • ChatAgent - conversational assistant.
  • GhostAgent - GUI automation (RPA).
  • ApprenticeAgent - learns workflows by observation.

4) Hybrid AI engine

  • HybridModelRouter (venom_core/execution/model_router.py) - local/cloud routing.
  • Modes: LOCAL, HYBRID, CLOUD.
  • Local-first: privacy and cost control first.
  • Providers: Ollama/vLLM (local), Gemini, OpenAI.
  • Sensitive data can be blocked from leaving local runtime.
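The local-first routing rule above can be sketched as a small decision function. This is an illustrative reduction, not the actual HybridModelRouter API: the `Mode`, `Request`, and `route` names, the provider strings, and the prompt-length threshold are all assumptions for the example.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    LOCAL = "local"
    HYBRID = "hybrid"
    CLOUD = "cloud"

@dataclass
class Request:
    prompt: str
    sensitive: bool = False  # e.g. flagged by a privacy check

def route(req: Request, mode: Mode) -> str:
    """Pick a provider; sensitive data never leaves the local runtime."""
    if req.sensitive or mode is Mode.LOCAL:
        return "ollama"   # local provider
    if mode is Mode.CLOUD:
        return "openai"   # cloud provider
    # HYBRID: prefer local, fall back to cloud for long prompts
    return "ollama" if len(req.prompt) < 2000 else "gemini"
```

The key property is that the sensitivity check comes first, so a CLOUD or HYBRID mode setting can never override the privacy constraint.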

5) Learning by demonstration

  • DemonstrationRecorder - records user actions (mouse, keyboard, screen).
  • DemonstrationAnalyzer - behavioral analysis and pixel-to-semantic mapping.
  • WorkflowStore - editable procedure repository.
  • GhostAgent integration - execution of generated workflows.

6) Orchestration and control

  • Orchestrator - core coordinator.
  • IntentManager - intent classification and path selection.
  • TaskDispatcher - routes tasks to agents.
  • Workflow Control Plane - visual workflow control.

7) The Academy

  • LessonStore - repository of experience and corrections.
  • Training Pipeline - LoRA/QLoRA fine-tuning.
  • Adapter Management - model adapter hot-swapping.
  • Genealogy - model evolution and metric tracking.

8) Runtime services

  • Backend API (FastAPI/uvicorn) and Next.js UI.
  • LLM servers: Ollama, vLLM.
  • LanceDB (embedded), Redis (optional).
  • Nexus and background tasks as optional processes.

Quick start

Path A: manual setup from Git (dev)

git clone https://github.com/mpieniak01/Venom.git
cd Venom
pip install -r requirements.txt
cp .env.example .env
make start

Path B: Docker script setup (single command)

git clone https://github.com/mpieniak01/Venom.git
cd Venom
scripts/docker/venom.sh

After startup:

  • API: http://localhost:8000
  • UI: http://localhost:3000

Most common commands

make start       # backend + frontend (dev)
make stop        # stop services
make status      # process status
make start-prod  # production mode

Frontend (Next.js - web-next)

The presentation layer runs on Next.js 15 (App Router, React 19).

  • SCC (server/client components) - server components by default, interactive parts as client components.
  • Shared layout (components/layout/*) - TopBar, Sidebar, status bar, overlays.

Frontend commands

npm --prefix web-next install
npm --prefix web-next run dev
npm --prefix web-next run build
npm --prefix web-next run test:e2e
npm --prefix web-next run lint:locales

Local API variables

NEXT_PUBLIC_API_BASE=http://localhost:8000
NEXT_PUBLIC_WS_BASE=ws://localhost:8000/ws/events
API_PROXY_TARGET=http://localhost:8000

Slash commands in Cockpit

  • Force tool: /<tool> (e.g. /git, /web).
  • Force provider: /gpt (OpenAI) and /gem (Gemini).
  • UI shows a Forced label when a prefix is detected.
  • UI language is sent as preferred_language in /api/v1/tasks.
  • Summary strategy (SUMMARY_STRATEGY): llm_with_fallback or heuristic_only.
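The prefix rules above can be sketched as a small parser. This is a hypothetical mirror of the Cockpit behavior for illustration only; the real UI logic and its return shape may differ.

```python
import re

# Provider prefixes from the README; everything else beginning with "/" is a tool.
PROVIDER_PREFIXES = {"/gpt": "openai", "/gem": "gemini"}

def parse_prefix(message: str) -> dict:
    """Split a leading /<prefix> off a Cockpit message."""
    m = re.match(r"^(/\w+)\s+(.*)$", message, flags=re.S)
    if not m:
        return {"tool": None, "provider": None, "text": message}
    prefix, rest = m.group(1), m.group(2)
    if prefix in PROVIDER_PREFIXES:
        return {"tool": None, "provider": PROVIDER_PREFIXES[prefix], "text": rest}
    return {"tool": prefix.lstrip("/"), "provider": None, "text": rest}
```

A non-None `tool` or `provider` is what would trigger the Forced label in the UI.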

Installation and dependencies

Requirements

Python 3.10+ (3.11 recommended)

Key packages

  • semantic-kernel>=1.9.0 - agent orchestration.
  • ddgs>=1.0 - web search.
  • trafilatura - web text extraction.
  • beautifulsoup4 - HTML parsing.
  • lancedb - vector memory database.
  • fastapi - API server.
  • zeroconf - mDNS service discovery.
  • pynput - user action recording.
  • google-genai - Gemini (optional).
  • openai / anthropic - LLM providers (optional).

Full list: requirements.txt

Running (FastAPI + Next.js)

Full checklist: docs/DEPLOYMENT_NEXT.md.

Development mode

make start
make stop
make status

Production mode

make start-prod
make stop

Lowest-memory configurations

| Configuration | Commands | Estimated RAM | Use case |
|---|---|---|---|
| Minimal | make api | ~50 MB | API tests / backend-only |
| Light with local LLM | make api + make ollama-start | ~450 MB | API + local model, no UI |
| Light with UI | make api + make web | ~550 MB | Demo and quick UI validation |
| Balanced | make api + make web + make ollama-start | ~950 MB | Day-to-day work without dev autoreload |
| Heaviest (dev) | make api-dev + make web-dev + make vllm-start | ~2.8 GB | Full development and local model testing |

Key environment variables

Full list: .env.example

Configuration panel (UI)

The panel at http://localhost:3000/config supports:

  • service status monitoring (backend, UI, LLM, Hive, Nexus),
  • start/stop/restart from UI,
  • realtime metrics (PID, port, CPU, RAM, uptime),
  • quick profiles: Full Stack, Light, LLM OFF.

Parameter editing

  • type/range validation,
  • secret masking,
  • .env backup to config/env-history/,
  • restart hints after changes.

Panel security

  • editable parameter whitelist,
  • service dependency validation,
  • timestamped change history.
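The panel safeguards listed above (whitelist, backup, history) can be sketched as one guarded edit function. Everything here is an assumption for illustration: the `EDITABLE` set, the `set_env_var` name, and the backup naming scheme are not Venom's actual implementation.

```python
import shutil
import time
from pathlib import Path

# Hypothetical whitelist of parameters the panel may edit.
EDITABLE = {"SUMMARY_STRATEGY", "VENOM_RUNTIME_PROFILE"}

def set_env_var(env_file: Path, key: str, value: str, history_dir: Path) -> None:
    """Whitelist-checked .env edit with a timestamped backup."""
    if key not in EDITABLE:
        raise PermissionError(f"{key} is not editable from the panel")
    history_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    shutil.copy(env_file, history_dir / f".env.{stamp}")  # change history
    out, found = [], False
    for line in env_file.read_text().splitlines():
        if line.startswith(f"{key}="):
            out.append(f"{key}={value}")
            found = True
        else:
            out.append(line)
    if not found:
        out.append(f"{key}={value}")
    env_file.write_text("\n".join(out) + "\n")
```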

Monitoring and environment hygiene

Resource monitoring

make monitor
bash scripts/diagnostics/system_snapshot.sh

Report (logs/diag-YYYYMMDD-HHMMSS.txt) includes:

  • uptime and load average,
  • memory usage,
  • top CPU/RAM processes,
  • Venom process status,
  • open ports (8000, 3000, 8001, 11434).

Dev environment hygiene (repo + Docker)

make env-audit
make env-clean-safe
make env-clean-docker-safe
CONFIRM_DEEP_CLEAN=1 make env-clean-deep
make env-report-diff

Docker package (end users)

Run with prebuilt images:

git clone https://github.com/mpieniak01/Venom.git
cd Venom
scripts/docker/venom.sh

Compose profiles:

  • compose/compose.release.yml - end-user profile (pull prebuilt images).
  • compose/compose.minimal.yml - developer profile (local build).
  • compose/compose.spores.yml.tmp - Spore draft, currently inactive.

Useful commands:

scripts/docker/venom.sh
scripts/docker/run-release.sh status
scripts/docker/run-release.sh restart
scripts/docker/run-release.sh stop
scripts/docker/uninstall.sh --stack both --purge-volumes --purge-images
scripts/docker/logs.sh

Runtime profile (single package, selectable mode):

export VENOM_RUNTIME_PROFILE=light   # light|llm_off|full
scripts/docker/run-release.sh start

llm_off means no local LLM runtime (Ollama/vLLM), but backend and UI can still use external LLM APIs (for example OpenAI/Gemini) after API key configuration.
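The profile semantics reduce to one question for the launcher: should local LLM containers start? A minimal sketch of that decision (the `local_llm_enabled` helper is an assumption, not the actual venom.sh logic):

```python
VALID_PROFILES = {"light", "llm_off", "full"}

def local_llm_enabled(profile: str) -> bool:
    """Per the README, only llm_off skips the local LLM runtime (Ollama/vLLM)."""
    if profile not in VALID_PROFILES:
        raise ValueError(f"unknown VENOM_RUNTIME_PROFILE: {profile}")
    return profile != "llm_off"
```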

Optional GPU mode:

export VENOM_ENABLE_GPU=auto
scripts/docker/run-release.sh restart

Quality and security

  • CI: Quick Validate + OpenAPI Contract + SonarCloud.
  • Security: GitGuardian + periodic dependency scans.
  • pre-commit run --all-files runs: block-docs-dev-staged, end-of-file-fixer, trailing-whitespace, check-added-large-files, check-yaml, debug-statements, ruff-check --fix, ruff-format, isort.
  • Extra hooks outside this command: block-docs-dev-tracked (stage pre-push) and update-sonar-new-code-group (stage manual).
  • pre-commit can auto-fix files; rerun it until all hooks pass.
  • Treat mypy venom_core as a full typing audit; the repository may include historical typing backlog not related to your change.
  • Local PR sequence:
source .venv/bin/activate || true
pre-commit run --all-files
make pr-fast
make check-new-code-coverage

Roadmap

✅ v1.5 (current)

  • v1.4 features (planning, knowledge, memory, integrations).
  • The Academy (LoRA/QLoRA).
  • Workflow Control Plane.
  • Provider Governance.
  • Academy Hardening.

🚧 v1.6 (planned)

  • Background polling for GitHub Issues.
  • Dashboard panel for external integrations.
  • Recursive long-document summarization.
  • Search result caching.
  • Plan validation and optimization.
  • Better error recovery.

🔮 v2.0 (future)

  • GitHub webhook handling.
  • MS Teams integration.
  • Multi-source verification.
  • Google Search API integration.
  • Parallel plan step execution.
  • Plan caching for similar tasks.
  • GraphRAG integration.

Conventions

  • Code and comments: Polish or English.
  • Commit messages: Conventional Commits (feat, fix, docs, test, refactor).
  • Style: Black + Ruff + isort (via pre-commit).
  • Tests: required for new functionality.
  • Quality gates: SonarCloud must pass on PR.

Team

  • Development lead: mpieniak01.
  • Architecture: Venom Core Team.
  • Contributors: Contributors list.

Thanks

  • Microsoft Semantic Kernel, Microsoft AutoGen, OpenAI / Anthropic / Google AI, pytest, open-source community.

Venom - Autonomous AI agent system for next-generation automation

License

This project is distributed under the MIT license. See LICENSE. Copyright (c) 2025-2026 Maciej Pieniak
