When running the protein network analysis, you'll see 40+ error messages like:

```
❌ Error loading tools from category 'tool_discovery_agents': [Errno 2] No such file or directory...
❌ Error loading tools from category 'web_search_tools': [Errno 2] No such file or directory...
...
```
This is a ToolUniverse framework limitation, not a bug in our implementation:
- ToolUniverse reloads tools on EVERY tool call (4 times in our workflow)
- Each reload attempts to load ALL tool categories (100+)
- Missing optional tool files generate error messages to stdout
- Cannot be suppressed from user code
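Because the messages go straight to stdout, the only recourse from user code is to redirect or filter that stream after the fact. A minimal demonstration of the principle (`noisy_tool_load` is a stand-in function, not a real ToolUniverse API):

```python
import contextlib
import io

def noisy_tool_load():
    # Stand-in for a library call that prints errors directly to stdout
    print("❌ Error loading tools from category 'web_search_tools'")
    return ["tool_a"]

# Redirect stdout while the noisy call runs, so its output is captured
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    tools = noisy_tool_load()

# The call succeeded; the noise went into the buffer instead of the terminal
print(f"Loaded {len(tools)} tool(s); captured {len(buf.getvalue().splitlines())} noisy line(s)")
```

This only hides the messages; it cannot stop the library from emitting them.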
Impact:
- ❌ Cluttered output: 40+ error lines obscure the actual results
- ❌ Performance: loading 1232 tools 4 times adds ~4-8 seconds of overhead
- ✅ Functionality: no impact; the analysis works correctly despite the warnings
Workaround 1: filter the output in the shell:

```bash
# Suppress ToolUniverse warnings
python python_implementation.py 2>&1 | grep -v "Error loading tools"

# Or save clean output only
python python_implementation.py 2>&1 | grep -E "(Phase|✅|🕸|🧬|🔗|Results)" > results.txt
```

Workaround 2: create missing placeholder files (prevents the error messages):
```bash
cd src/tooluniverse/data/
for f in tool_discovery_agents web_search_tools package_discovery_tools \
         pypi_package_inspector_tools drug_discovery_agents hca_tools \
         clinical_trials_tools iedb_tools pathway_commons_tools biomodels_tools; do
  echo "[]" > "${f}_tools.json"
done
```

Workaround 3: capture and filter the output in Python:

```python
import sys
from io import StringIO

# Capture output
old_stdout = sys.stdout
sys.stdout = buffer = StringIO()

# Run analysis
result = analyze_protein_network(...)

# Restore stdout and filter the captured output
sys.stdout = old_stdout
output = buffer.getvalue()
clean_output = '\n'.join(
    line for line in output.split('\n')
    if 'Error loading tools' not in line
)
print(clean_output)
```

This should be fixed in ToolUniverse core by:
- Caching loaded tools (don't reload on every call)
- Suppressing warnings for optional missing files
- Using proper logging levels (DEBUG vs ERROR)
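As a sketch of what the logging-level fix might look like upstream (the logger name `tooluniverse.loader` and the function `load_category` are hypothetical, not the real ToolUniverse API):

```python
import logging

logger = logging.getLogger("tooluniverse.loader")  # hypothetical logger name

def load_category(path):
    """Treat a missing optional tool file as a DEBUG event, not a printed error."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except FileNotFoundError:
        # Missing optional files are expected; log quietly instead of printing
        logger.debug("Optional tool file missing: %s", path)
        return []  # empty category, analysis continues

tools = load_category("definitely_missing_tools.json")
```

With Python's default WARNING threshold these messages would be silent, while developers could still surface them via `logging.basicConfig(level=logging.DEBUG)`.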
Status: Framework limitation - workarounds required until fixed upstream.
ToolUniverse loads 1232 tools 4 separate times during a single analysis:
- ⚠️ Slow: ~4-8 seconds of overhead
- ⚠️ Memory: up to 4x memory usage

Workaround: none available; this is how ToolUniverse currently works, since each tool call triggers a full reload. ToolUniverse should cache loaded tools in memory across calls.
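Such a cache could be as simple as memoizing the loader. A minimal sketch, assuming a hypothetical `load_all_tools` entry point (this is not ToolUniverse code):

```python
import functools

load_count = {"n": 0}  # instrumentation to show the cache working

@functools.lru_cache(maxsize=None)
def load_all_tools(data_dir: str) -> tuple:
    """Hypothetical loader: the expensive work runs once per data_dir."""
    load_count["n"] += 1
    return ("tool_a", "tool_b")  # stand-in for the 1232 real tool definitions

# The 4 tool calls in the workflow would each invoke the loader...
for _ in range(4):
    load_all_tools("src/tooluniverse/data")

# ...but the load itself happens only once; the rest are cache hits
assert load_count["n"] == 1
```

Returning an immutable tuple keeps the cached value safe from accidental mutation by callers.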
All parameter names are CORRECT:
- `protein_ids` (not `identifiers`) - ✅ Verified in Phase 2
- `gene_names` (plural) - ✅ Verified in Phase 2
- `sasbdb_id` - ✅ Verified in Phase 2
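For illustration, hypothetical request payloads using the verified key names (the values are placeholders, not identifiers taken from the analysis; `P04637` is the UniProt accession for human TP53):

```python
# Key names match the verified parameters above; values are illustrative only.
mapping_query = {"protein_ids": ["P04637"]}    # not "identifiers"
interaction_query = {"gene_names": ["TP53"]}   # plural, not "gene_name"
structure_query = {"sasbdb_id": "EXAMPLE_ID"}  # placeholder value
```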
All 4 phases work correctly:
- Phase 1: 100% mapping success ✅
- Phase 2: Correct interaction retrieval ✅
- Phase 3: Valid enrichment analysis ✅
- Phase 4: Clean error handling ✅
TP53 analysis produces expected results:
- 10 high-confidence interactions (0.98-0.999)
- 374 enriched GO terms (p < 0.05)
- PPI enrichment highly significant (p=1.99e-06)