Releases: TingjiaInFuture/allbemcp
v2.2.0
v2.1.0
Update pyproject.toml
v2.0.3
Update pyproject.toml
v2.0.2
Update pyproject.toml
v2.0.1
refactor(runtime): align tool registration with FastMCP 3.x FunctionT…
v2.0.0: The Universal Engine Evolution (Object Flow & Massive Perf Boost)
🎉 Welcome to allbemcp v2.0.0!
This is a massive milestone release that transforms allbemcp from a proof-of-concept into a production-grade Python-to-MCP engine. We have completely overhauled the AST analyzer, runtime server, and serialization engine to make it dramatically faster, memory-safe, and far smarter.
🔥 Epic New Features (The Magic)
- Seamless Object Flow (Cross-Tool Chaining): The LLM can now pass stored objects directly as arguments to other functions! If a tool requires a complex object, the LLM simply passes `{"arg": "obj_123"}` and the runtime automatically re-hydrates it into the actual Python object.
- Constructor Factory Extraction: Object-oriented libraries are now fully supported! Classes are automatically scanned and their `__init__` methods are exposed as `create_{class_name}` tools, allowing the LLM to natively instantiate complex objects.
- Incremental Module Caching: Introduced a blazing-fast `IncrementalCache` built on file fingerprinting (`st_mtime_ns` + `st_size`). Subsequent CLI runs (`inspect`, `generate`) on the same library are now near-instant.
- Facade Pattern Support: Completely rewrote the `__all__` gatekeeper logic. Libraries that expose inner-module APIs via a root `__init__.py` (like `pixrep`, `pandas`) are now correctly recognized and exported.
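The cross-tool object flow described above can be sketched roughly as follows. This is a minimal illustration, not allbemcp's actual implementation: the store, the `obj_<n>` handle format, and all function names here are hypothetical.

```python
import re

# Hypothetical in-memory object store: string handle -> live Python object.
_object_store = {}
_next_id = 0
_HANDLE_RE = re.compile(r"^obj_\d+$")

def store_object(obj):
    """Store a tool result and return a string handle the LLM can pass around."""
    global _next_id
    handle = f"obj_{_next_id}"
    _next_id += 1
    _object_store[handle] = obj
    return handle

def rehydrate(value):
    """If an argument looks like a stored-object handle, swap in the real object."""
    if isinstance(value, str) and _HANDLE_RE.match(value) and value in _object_store:
        return _object_store[value]
    return value

def call_tool(func, **kwargs):
    """Re-hydrate every argument before invoking the underlying library function."""
    return func(**{k: rehydrate(v) for k, v in kwargs.items()})
```

So when the LLM sends `{"items": "obj_0"}`, the runtime transparently replaces the handle with the stored Python object before the call.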
⚡ Massive Performance Leaps
- AST Memory Optimization: Replaced aggressive `ast.walk` with shallow body traversal. Memory consumption when analyzing giant libraries (e.g., pandas, numpy) has dropped by orders of magnitude, eliminating OOM crashes.
- O(n) & HeapQ Algorithms: Upgraded the deduplication engine from O(n²) to O(n) using sets and immutable tuples. The adaptive filtering now uses a highly optimized global weighted `heapq` allocation, making API selection significantly faster and fairer.
- MRO Type Dispatch Caching: The serialization engine now caches `__mro__` lookups (`_dispatch_cache`), reducing high-frequency serialization dispatch to O(1).
- Deep Parallel Scanning: Widened the thread-pool executor to scan nested submodules (`depth <= 1`) concurrently without hitting Python import-lock deadlocks.
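The set-based deduplication plus weighted heap selection pattern can be sketched like this. The field names (`module`, `name`, `weight`) are assumptions for illustration, not allbemcp's internal schema.

```python
import heapq

def select_apis(candidates, k):
    """Deduplicate candidate APIs by an immutable (module, name) key in O(n),
    then pick the k highest-weighted survivors with a heap (O(n log k))."""
    seen = set()
    unique = []
    for api in candidates:
        key = (api["module"], api["name"])  # immutable tuple as dedup key
        if key not in seen:
            seen.add(key)
            unique.append(api)
    return heapq.nlargest(k, unique, key=lambda a: a["weight"])
```

`heapq.nlargest` keeps only a k-sized heap in memory, so selection stays cheap even for libraries exposing thousands of candidate APIs.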
🛡️ Robustness & Safety
- LRU Memory Management: The runtime `_object_store` is now backed by an `OrderedDict` with a maximum capacity. Old objects are safely evicted via an LRU (Least Recently Used) policy.
- Daemon Thread Cleanup: Implemented a robust background daemon thread with `threading.Event()` for TTL-based object garbage collection, preventing memory leaks in long-running servers.
- Anti-Injection Code Generator: The MCP server generator now safely embeds tool definitions via `json.loads(repr(tools_json))`, preventing syntax errors caused by unusual docstring escape characters.
- Runtime Telemetry: Added a built-in `get-call-stats` tool to monitor execution counts, error rates, and average latency of the generated MCP tools.
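An `OrderedDict`-backed LRU store like the one described above might look roughly like this; the class name and capacity default are illustrative stand-ins, not the runtime's actual code.

```python
from collections import OrderedDict

class LRUObjectStore:
    """Bounded object store: at capacity, evicts the least-recently-used entry.
    Illustrative sketch of an OrderedDict-backed _object_store."""

    def __init__(self, max_size=128):
        self.max_size = max_size
        self._data = OrderedDict()

    def put(self, key, obj):
        if key in self._data:
            self._data.move_to_end(key)  # refresh recency on overwrite
        self._data[key] = obj
        if len(self._data) > self.max_size:
            self._data.popitem(last=False)  # drop the oldest entry

    def get(self, key):
        obj = self._data[key]
        self._data.move_to_end(key)  # mark as recently used
        return obj
```

`OrderedDict.move_to_end` and `popitem(last=False)` give O(1) recency updates and evictions, which is why it is a common backing structure for LRU caches.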
🐛 Critical Bug Fixes
- Fixed: The "No quality stats available" bug where strict root `__all__` checks were incorrectly filtering out 100% of valid APIs.
- Fixed: A double JSON-encoding bug in `generator.py` that caused generated MCP servers to crash with an `AttributeError` on startup.
💡 Upgrade Note:
To utilize the new Object Flow and LRU memory management, please re-run `allbemcp generate <your_library>` to regenerate your existing MCP servers!