Roadmap note: convert vague bullets without acceptance criteria into checkbox tasks. Format:
- [ ] <Task> (Target: <quarter/year>)
Production-ready for LLM-assisted AQL query generation, natural language to AQL translation, and documentation assistance. Core AQL parsing and execution are handled by the query module. Phase 4 adds a generic AQLTokenStream streaming interface and a ReActAgent multi-step reasoning framework with tool calling.
- LlmAqlHandler for INFER, RAG, EMBED, MODEL, and LORA command processing
- Natural language to AQL translation via LLM integration
- AQL documentation assistant for function lookup and explanation
- Query explanation and profiling assistance
- LLM command handler infrastructure (request routing, response parsing)
- Support for multi-paradigm AQL (documents, graphs, vectors, geospatial, timeseries)
- Integration with OpenAI, Anthropic, Azure OpenAI, and llama.cpp providers
- AQL syntax highlighting and error annotation in LLM responses (`AQLSyntaxHighlighter`) (Issue: #1353)
- Confidence scoring for generated AQL queries (`LLMAQLHandler::translateNLToAQLWithConfidence`, `AQLConfidenceScorer`) (Issue: #1357)
- Multi-turn conversation context for iterative query refinement (`LLMAQLHandler::executeChat`) (Issue: #1358)
- AQL auto-complete API for editor integrations (LSP-compatible) (Issue: #1359)
- AQL query migration assistant (ArangoDB AQL → ThemisDB AQL) (Issue: #1360)
- Schema-aware query generation using live collection metadata (Issue: #1361)
- AQL function documentation auto-generation from C++ headers (Issue: #1362)
- Fine-tuned local model (LoRA adapter) for ThemisDB-specific AQL (Issue: #1363)
- Integration with query optimizer for cost-aware suggestions (Issue: #1364)
- Few-shot example library for improved NL-to-AQL accuracy (Target: Q3 2026) (Issue: #1521)
- Streaming natural language responses for long AQL explanations (Target: Q2 2026) (Issue: #1950) — `POST /api/v1/llm/aql/explain/stream` SSE endpoint exposing `LLMAQLHandler::streamExplainAQLAsSSE()`
- AQL query validation and linting before LLM submission (`src/aql/aql_query_validator.cpp`) (Issue: #1525)
- Query template library for common AQL patterns (`src/aql/aql_query_template_library.cpp`)
- Schema-aware programmatic AQL query builder (`src/aql/aql_query_builder.cpp`)
- LLM inference metrics collection (`src/aql/llm_metrics_collector.cpp`)
- Post-generation AQL validation with selectable enforcement (`WARN_ONLY`/`REJECT_ON_ERROR`/`RETRY_ON_ERROR`) in `LLMAQLHandler::translateNLToAQL*` (`src/aql/llm_aql_handler.cpp`)
- Thread-leak fix in `LLMTimeoutManager::executeWithTimeout()`/`executeWithCancelToken()` using `std::jthread` + stop-token cancellation (`include/aql/llm_timeout_manager.h`)
- Per-operation circuit breakers for INFER/RAG/EMBED/FINETUNE with per-command config and observability (`LLMAQLHandler::getCircuitBreakerStates`)
- Bounded conversation history with context-window budget (`AQLConversationContext::Config{max_turns, max_history_tokens}` + token-based eviction)
- Generic `AQLTokenStream` iterator API for all LLM inference calls (`include/aql/aql_token_stream.h`) (Phase 4)
- ReActAgent multi-step reasoning framework with tool calling (`src/aql/aql_agent.cpp`) (Phase 4)
- Runtime-configurable confidence scoring weights (`AQLConfidenceScorer::Config`, `calibrate()`, word-boundary keyword matching) (Target: v1.6.0) (Issue: #144)
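The bounded conversation history above can be sketched as a deque with a dual budget (turn count and token count). `BoundedHistory` and `Turn` here are illustrative stand-ins for the real `AQLConversationContext`, whose `Config` carries the `max_turns`/`max_history_tokens` fields:

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Illustrative token-budget history; not the actual AQLConversationContext.
struct Turn {
    std::string text;
    std::size_t tokens;
};

class BoundedHistory {
public:
    BoundedHistory(std::size_t max_turns, std::size_t max_history_tokens)
        : max_turns_(max_turns), max_tokens_(max_history_tokens) {}

    void add(Turn t) {
        total_ += t.tokens;
        turns_.push_back(std::move(t));
        // Evict oldest turns until both budgets are satisfied again.
        while (turns_.size() > max_turns_ || total_ > max_tokens_) {
            total_ -= turns_.front().tokens;
            turns_.pop_front();
        }
    }

    std::size_t size() const { return turns_.size(); }
    std::size_t totalTokens() const { return total_; }

private:
    std::deque<Turn> turns_;
    std::size_t max_turns_;
    std::size_t max_tokens_;
    std::size_t total_ = 0;
};
```

Evicting oldest-first keeps the most recent turns, which matter most for iterative query refinement.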
(no items currently in progress)
- Generic `AQLTokenStream` iterator API for all LLM inference calls (Target: v1.7.0)
  - Files: `include/aql/aql_token_stream.h` (header-only), `tests/test_aql_token_stream.cpp`
  - Behavior: thread-safe queue with `push(token)`/`close()` from the producer thread; `nextToken()` blocking pop and range-based for-loop from the consumer thread; `cancel()` with cooperative cancellation flag (`isCancelled()`)
  - Errors: push after cancel is silently discarded; `nextToken()` after cancel/close returns `std::nullopt`; destructor calls `close()` to unblock any waiting consumer
  - Tests: unit (single-threaded push+drain, ordering, empty stream, cancel, idempotent close, concurrent producer/consumer with 100 tokens, range-based for loop, cancel mid-stream)
  - Perf: `push()` and `nextToken()` overhead ≤ 500 ns excluding model generation time; no busy-wait (condition variable)
  - Compat: header-only, C++17; no external dependencies beyond STL
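The contract above can be sketched with a mutex-guarded `std::queue` and a condition variable. `TokenStream` here is a simplified stand-in, not the actual `AQLTokenStream` header:

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <string>

// Simplified sketch of the streaming contract; the real implementation
// lives in include/aql/aql_token_stream.h.
class TokenStream {
public:
    // Producer side: enqueue a token; silently discarded after cancel()/close().
    void push(std::string token) {
        std::lock_guard<std::mutex> lk(m_);
        if (cancelled_ || closed_) return;
        q_.push(std::move(token));
        cv_.notify_one();
    }
    void close() {
        std::lock_guard<std::mutex> lk(m_);
        closed_ = true;
        cv_.notify_all();
    }
    void cancel() {
        std::lock_guard<std::mutex> lk(m_);
        cancelled_ = true;
        cv_.notify_all();
    }
    bool isCancelled() const {
        std::lock_guard<std::mutex> lk(m_);
        return cancelled_;
    }
    // Consumer side: blocking pop; nullopt once cancelled or drained+closed.
    std::optional<std::string> nextToken() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return cancelled_ || closed_ || !q_.empty(); });
        if (cancelled_ || q_.empty()) return std::nullopt;
        std::string t = std::move(q_.front());
        q_.pop();
        return t;
    }
    ~TokenStream() { close(); }  // unblock any waiting consumer

private:
    mutable std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> q_;
    bool closed_ = false;
    bool cancelled_ = false;
};
```

Note the asymmetry in the spec: after `close()` remaining tokens are drained, while `cancel()` discards them immediately.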
- AQL Agent Framework – ReAct (Reasoning + Acting) multi-step agent with tool calling (Target: v1.7.0)
  - Files: `include/aql/aql_agent.h`, `src/aql/aql_agent.cpp`, `tests/test_aql_agent.cpp`
  - Types: `AgentTool` (name, description, JSON Schema, executor), `AgentConfig` (model, max_iterations, temperature), `ReasoningStep` (thought, tool_name, tool_input, tool_output, observation), `AgentResult` (final_answer, reasoning_trace, iterations_used, succeeded), `IAgent` (abstract), `ReActAgent` (Pimpl concrete implementation)
  - Behavior: iterates Thought→Action→Observation cycles up to `max_iterations`; stops when the LLM emits a "Final Answer:" prefix; tool executor errors are captured as JSON and fed back as observations (never propagated to the caller)
  - Errors: duplicate tool registration → `std::invalid_argument`; unknown tool removal → `std::invalid_argument`; LLM failure → `LLMException(INFERENCE_FAILED)`; max iterations reached → `AgentResult.succeeded = false` with a descriptive message
  - Tests: unit (register/remove/duplicate tools, config, execute with no model, execute with max iterations, execute with tool invocation, move semantics)
  - Perf: tool dispatch overhead ≤ 1 ms per step excluding tool execution; no heap allocation in the reasoning-step parsing hot path
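The ReAct loop described above can be sketched as follows. `MiniReActAgent` and the `tool_name|tool_input` action encoding are invented for illustration; the real `ReActAgent` parses structured LLM output and records full `ReasoningStep` traces:

```cpp
#include <functional>
#include <map>
#include <stdexcept>
#include <string>

// Illustrative ReAct loop; not the real ReActAgent from src/aql/aql_agent.cpp.
struct AgentResult {
    std::string final_answer;
    int iterations_used = 0;
    bool succeeded = false;
};

class MiniReActAgent {
public:
    using Tool = std::function<std::string(const std::string&)>;

    void registerTool(const std::string& name, Tool t) {
        if (tools_.count(name))
            throw std::invalid_argument("duplicate tool: " + name);
        tools_[name] = std::move(t);
    }

    // `llm` maps the growing scratchpad to the next model utterance.
    AgentResult run(const std::function<std::string(const std::string&)>& llm,
                    int max_iterations) {
        AgentResult r;
        std::string scratchpad;
        for (int i = 0; i < max_iterations; ++i) {
            r.iterations_used = i + 1;
            std::string out = llm(scratchpad);
            const std::string kFinal = "Final Answer:";
            if (out.rfind(kFinal, 0) == 0) {     // stop on "Final Answer:" prefix
                r.final_answer = out.substr(kFinal.size());
                if (!r.final_answer.empty() && r.final_answer.front() == ' ')
                    r.final_answer.erase(r.final_answer.begin());
                r.succeeded = true;
                return r;
            }
            // Otherwise treat the output as an action: "tool_name|tool_input".
            auto sep = out.find('|');
            std::string name = out.substr(0, sep);
            std::string input =
                (sep == std::string::npos) ? "" : out.substr(sep + 1);
            std::string observation;
            try {
                auto it = tools_.find(name);
                observation = (it == tools_.end())
                                  ? "{\"error\":\"unknown tool\"}"
                                  : it->second(input);
            } catch (const std::exception& e) {
                // Executor errors become observations, never propagate.
                observation = std::string("{\"error\":\"") + e.what() + "\"}";
            }
            scratchpad += out + "\nObservation: " + observation + "\n";
        }
        r.final_answer = "max iterations reached";
        return r;  // succeeded stays false
    }

private:
    std::map<std::string, Tool> tools_;
};
```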
- Multi-modal AQL agent inputs (image/audio/video via `MultiModalInferRequest`) (Target: v1.8.0)
- `IAsyncLLMBackend` – non-blocking `std::future<Result<T>>` inference interface (Target: v1.8.0)
- LlmAqlHandler for INFER, RAG, EMBED, MODEL, and LORA command processing (`src/aql/llm_aql_handler.cpp`)
- Natural language to AQL translation via LLM integration
- AQL documentation assistant for function lookup and explanation
- Query explanation and profiling assistance
- LLM command handler infrastructure (request routing, response parsing)
- Multi-paradigm AQL support: documents, graphs, vectors, geospatial, timeseries
- Provider integration: OpenAI, Anthropic, Azure OpenAI, llama.cpp
- AQL query validation and linting (`src/aql/aql_query_validator.cpp`)
- Schema-aware programmatic AQL query builder (`src/aql/aql_query_builder.cpp`)
- Query template library for common AQL patterns (`src/aql/aql_query_template_library.cpp`)
- Token-level autocompletion / LSP-compatible suggestions (`src/aql/aql_autocomplete.cpp`)
- Multi-turn conversation context (`src/aql/aql_conversation_context.cpp`)
- Batch NL-to-AQL translation for offline workloads (Issue: #1356)
- Generic `TokenStream` iterator API for all LLM inference calls (Target: v1.7.0)
  - `include/aql/aql_token_stream.h` – header-only, thread-safe push/pop/cancel/range-for
  - `tests/test_aql_token_stream.cpp` – 14 unit tests covering single-threaded, concurrent, cancel, and range-for scenarios
- AQL Agent Framework – ReAct multi-step agent with tool calling (Target: v1.7.0)
  - `include/aql/aql_agent.h` – `AgentTool`, `AgentConfig`, `ReasoningStep`, `AgentResult`, `IAgent`, `ReActAgent`
  - `src/aql/aql_agent.cpp` – Pimpl ReActAgent implementation
  - `tests/test_aql_agent.cpp` – 17 unit tests covering registration, config, execute, move semantics
- Multi-modal agent inputs (image/audio/video) (Target: v1.8.0)
  - `include/aql/multimodal_infer_request.h` – `ModalityType`, `MultiModalInput` (MIME-validated), `MultiModalInferRequest`
  - `tests/test_aql_multimodal.cpp` – 28 unit tests covering construction, MIME validation, helpers
- `IAsyncLLMBackend` async interface (Target: v1.8.0)
  - `include/aql/iasync_llm_backend.h` – pure abstract `IAsyncLLMBackend` + `ThreadPoolAsyncLLMBackend` adapter
  - `tests/test_aql_async_backend.cpp` – 11 unit tests covering construction, async inference, error propagation
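One plausible shape for the async interface, sketched with a plain `std::future<std::string>` instead of the `Result<T>` wrapper mentioned above, and `std::async` instead of a real thread pool. `AsyncAdapter` is an invented name standing in for the `ThreadPoolAsyncLLMBackend` idea:

```cpp
#include <functional>
#include <future>
#include <string>

// Hypothetical async inference interface; the real header is
// include/aql/iasync_llm_backend.h and may differ.
struct IAsyncLLMBackend {
    virtual ~IAsyncLLMBackend() = default;
    virtual std::future<std::string> inferAsync(std::string prompt) = 0;
};

// Adapter that offloads a blocking inference call onto a task thread.
class AsyncAdapter : public IAsyncLLMBackend {
public:
    using SyncFn = std::function<std::string(const std::string&)>;
    explicit AsyncAdapter(SyncFn fn) : fn_(std::move(fn)) {}

    std::future<std::string> inferAsync(std::string prompt) override {
        // std::launch::async guarantees the work runs on its own thread.
        return std::async(std::launch::async,
                          [fn = fn_, p = std::move(prompt)] { return fn(p); });
    }

private:
    SyncFn fn_;
};
```

Exceptions thrown by the wrapped function surface on `future::get()`, which gives callers a natural error-propagation path.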
- Feature 8 – Semantic Few-Shot Selection (`IEmbeddingProvider` interface, `setEmbeddingProvider()`/`rebuildEmbeddingIndex()` on `AQLFewShotExampleLibrary`)
  - `include/aql/aql_fewshot_example_library.h` – `IEmbeddingProvider` abstract interface, new methods + semantic private helpers
  - `src/aql/aql_fewshot_example_library.cpp` – cosine-similarity ranking with Jaccard fallback, lazy embedding cache
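The two ranking metrics named above (cosine similarity over embeddings, token-set Jaccard as a fallback when no `IEmbeddingProvider` is configured) can be sketched as free functions; these are textbook formulas, not the library's actual code:

```cpp
#include <cmath>
#include <cstddef>
#include <set>
#include <string>
#include <vector>

// Cosine similarity of two embedding vectors; 0.0 for zero-norm inputs.
double cosineSimilarity(const std::vector<float>& a,
                        const std::vector<float>& b) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    if (na == 0.0 || nb == 0.0) return 0.0;
    return dot / (std::sqrt(na) * std::sqrt(nb));
}

// Jaccard similarity of two token sets: |A ∩ B| / |A ∪ B|.
double jaccard(const std::set<std::string>& a,
               const std::set<std::string>& b) {
    if (a.empty() && b.empty()) return 0.0;
    std::size_t inter = 0;
    for (const auto& t : a) inter += b.count(t);
    return static_cast<double>(inter) /
           static_cast<double>(a.size() + b.size() - inter);
}
```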
- Feature 10 – Deduplicated prompt-building helpers in `LLMAQLHandler`
  - `include/aql/llm_aql_handler.h` – private static helpers: `buildNLToAQLSystemPrompt()`, `stripMarkdownFences()`, `logAnnotations()`
  - `src/aql/llm_aql_handler.cpp` – refactored all three translate methods to use the shared helpers
- Feature 12 – Schema-aware `AQLQueryValidator::validate(query, schema)` overload
  - `include/aql/aql_query_validator.h` – includes `aql_schema_provider.h`; new `validate(string, vector<CollectionMetadata>)` overload + private helpers
  - `src/aql/aql_query_validator.cpp` – `checkUnknownCollections()` and `checkUnknownFields()` implementations
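An illustrative, regex-based version of the `checkUnknownCollections()` idea: scan the query for `FOR … IN <collection>` and flag names missing from the schema snapshot. The real validator in `src/aql/aql_query_validator.cpp` almost certainly works differently (and against `CollectionMetadata`, not a bare name set):

```cpp
#include <regex>
#include <set>
#include <string>
#include <vector>

// Sketch only: flags collection names in `FOR x IN <name>` clauses that do
// not appear in the known-collection set.
std::vector<std::string> unknownCollections(
        const std::string& query,
        const std::set<std::string>& known) {
    std::vector<std::string> unknown;
    static const std::regex kForIn(R"(FOR\s+\w+\s+IN\s+(\w+))");
    for (std::sregex_iterator it(query.begin(), query.end(), kForIn), end;
         it != end; ++it) {
        std::string name = (*it)[1].str();
        if (!known.count(name)) unknown.push_back(name);
    }
    return unknown;
}
```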
- Feature 13 – Runtime-overridable `ValidationLimitsConfig` + `setValidationLimits()`/`setTimeoutConfig()`
  - `include/aql/llm_error_codes.h` – new `ValidationLimitsConfig` struct (runtime version of the `ValidationLimits` namespace)
  - `include/aql/llm_aql_handler.h` / `src/aql/llm_aql_handler.cpp` – `setValidationLimits()`, `getValidationLimits()`, `setTimeoutConfig()`
- Feature 14 – Named LoRA hyperparameter constants + `Config::fromOptions()` factory
  - `include/aql/aql_lora_finetuner.h` – `kDefaultRank`, `kDefaultAlpha`, etc. as `static constexpr` members; `fromOptions()` declaration
  - `src/aql/aql_lora_finetuner.cpp` – `fromOptions()` with range validation for all hyperparameters
- Feature 15 – `DocsAssistantFunctions` degraded-mode reporting
  - `include/aql/docs_assistant_functions.h` – `DegradedReason` enum, `isFullyReady()`, `degradedReason()` declarations
  - `src/aql/docs_assistant_functions.cpp` – `Impl` tracks `degraded_reason_` + `degraded_message_`; emits `spdlog::warn` before degrading
- Unit test coverage > 80% (42 unit tests in the few-shot library + 3 performance benchmarks + 7 integration tests + 13 injection tests + 1 highlighter-path integration test in the handler + 14 token-stream tests + 17 agent tests + 28 multimodal tests + 11 async-backend tests)
- Integration tests (handler ↔ highlighter path covered)
- Performance benchmarks (few-shot library: `findRelevant`/`buildPromptSection` timing tests added; `AQLSyntaxHighlighter`, `AQLConfidenceScorer`, and `AQLFewShotExampleLibrary` benchmarks implemented in `benchmarks/bench_hybrid_aql_sugar.cpp`; Issue: #1523)
- Security audit (prompt injection prevention via `sanitizePromptInput()` in `translateNLToAQL()`, `translateNLToAQLStreaming()`, and `translateNLToAQLWithExamples()`; `AgentTool` executor exceptions are caught and returned as JSON error objects)
- Documentation complete (README.md and ROADMAP.md updated; FUTURE_ENHANCEMENTS.md Implementation Notes added)
- API stability guaranteed (Issue: #1524)
- NL-to-AQL accuracy depends on LLM provider quality and prompt engineering
- No offline fallback when no LLM provider is configured
- Prompt injection in `translateNLToAQL()` is mitigated by pattern-based input sanitization; advanced adversarial inputs not covered by the current pattern set may still bypass detection
- Schema-aware generation is supported: attach a metadata snapshot via `AQLQueryBuilder::setSchema()` (`include/aql/aql_query_builder.h`); schema context is injected automatically into LLM prompts, and unknown collection names are flagged as validation warnings
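Pattern-based sanitization of the kind described for `sanitizePromptInput()` can be sketched as a case-insensitive regex replace. The pattern list below is invented for illustration and is not the handler's real pattern set, which is precisely why inputs outside it can slip through:

```cpp
#include <regex>
#include <string>

// Illustrative pattern-based mitigation; the real pattern set lives in
// src/aql/llm_aql_handler.cpp and is broader than this.
std::string sanitizePrompt(const std::string& input) {
    static const std::regex kPatterns(
        R"((ignore (all )?previous instructions|system prompt|you are now))",
        std::regex::icase);
    return std::regex_replace(input, kPatterns, "[removed]");
}
```

A blocklist like this neutralizes known phrasings but cannot catch paraphrases, which is the limitation stated above.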
- LLM command handler API is stable; new command types will be additive
- Confidence scoring API was introduced as an optional field (non-breaking)
As of: 2026-04-20 – Source: src/UNUSED_FUNCTIONS_REPORT.md
- `ReActAgent` – LLM-driven reasoning+acting agent for multi-step AQL queries. Action: add a ROADMAP ticket for production integration, or mark as CANDIDATE_FOR_REMOVAL.