Date: 2025-10-16
Issue: Raw API response still appearing despite fix
Status: 🔍 DEBUGGING - Added logging to diagnose
After adding the console output fix, raw API responses are STILL being dumped:
```
╔═══ FINAL ANSWER ═══╗
║ 'choices' 'finish_reason' 'length' 'index' 'message' 'role' 'assistant' 'content' String theory...║
╚═════════════════════╝
```
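For context, this is a rough reproduction of that symptom, assuming rich's Panel (which main.py uses for the FINAL ANSWER box) and a hand-built OpenAI-style dict standing in for `result['result']`; it is a sketch, not the project's actual call site:

```python
from rich import box
from rich.console import Console
from rich.panel import Panel

# Hypothetical raw OpenAI-style response dict (stand-in for result['result'])
raw = {'choices': [{'finish_reason': 'length', 'index': 0,
                    'message': {'role': 'assistant', 'content': 'String theory...'}}]}

# Stringifying the whole dict and handing it to the panel produces a similar
# dump of keys and values instead of just the assistant's content.
Console().print(Panel(str(raw), title="FINAL ANSWER", box=box.DOUBLE))
```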
This suggests that:
- Either `markdown_output` is empty/false → fallback to `print_json_output()`
- Or the `result['result']` structure is completely malformed
File: /home/joker/SynapticLlamas/main.py (lines 1356-1392)
Added comprehensive logging to track:
- What's in `markdown_output` before extraction attempts
- What keys exist in `result['result']` to understand the structure
- Which extraction path succeeds (Ollama vs OpenAI vs fallback)
- Why the fallback happens if no content is extracted
```python
# Debug logging
logger.debug(f"DEBUG: markdown_output type: {type(markdown_output)}")
logger.debug(f"DEBUG: markdown_output length: {len(str(markdown_output)) if markdown_output else 0}")
logger.debug(f"DEBUG: result['result'] keys: {list(result['result'].keys()) if isinstance(result['result'], dict) else 'NOT A DICT'}")

# If no final_output, try to extract from nested structure
if not markdown_output or not isinstance(markdown_output, str):
    # Try common response structures
    result_data = result['result']
    if 'message' in result_data and isinstance(result_data['message'], dict):
        # Ollama response format
        markdown_output = result_data['message'].get('content', '')
        logger.info(f"✅ Extracted content from Ollama format (length: {len(markdown_output)} chars)")
    elif 'choices' in result_data and isinstance(result_data['choices'], list):
        # OpenAI response format
        if len(result_data['choices']) > 0:
            choice = result_data['choices'][0]
            if 'message' in choice:
                markdown_output = choice['message'].get('content', '')
                logger.info(f"✅ Extracted content from OpenAI format (length: {len(markdown_output)} chars)")

if isinstance(markdown_output, str) and markdown_output:
    logger.info(f"📄 Displaying markdown panel (length: {len(markdown_output)} chars)")
    console.print(Panel(...))
else:
    # Fallback to cleaned JSON output
    logger.warning("⚠️ No markdown content found, falling back to JSON display")
    logger.warning(f"  markdown_output: {repr(markdown_output)[:100]}")
    print_json_output(result['result'])
```

When you run a query, check the logs for:
```
INFO - ✅ Extracted content from OpenAI format (length: 1234 chars)
INFO - 📄 Displaying markdown panel (length: 1234 chars)
```

or, on the fallback path:

```
WARNING - ⚠️ No markdown content found, falling back to JSON display
WARNING -   markdown_output: ''
DEBUG - DEBUG: markdown_output type: <class 'str'>
DEBUG - DEBUG: markdown_output length: 0
DEBUG - DEBUG: result['result'] keys: ['choices', 'created', 'model', 'usage', ...]
```
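Note that the DEBUG lines only show up if the logger is actually configured at DEBUG level; a minimal sketch using the standard library logging module (the project may wire up logging differently):

```python
import logging

# Enable DEBUG output so the diagnostic lines above are emitted
logging.basicConfig(
    level=logging.DEBUG,
    format="%(levelname)s - %(message)s",
)
```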
Based on your output, the likely issue is that the orchestrator is returning `result['result']` as a raw API response dict instead of wrapping it properly:
```python
# WRONG (what might be happening):
return {
    'result': raw_api_response,  # This is {'choices': [...], 'usage': {...}}
    'metrics': {...}
}

# RIGHT (what should happen):
return {
    'result': {
        'final_output': extracted_content,
        'metadata': {...}
    },
    'metrics': {...}
}
```

The distributed orchestrator might be bypassing agents and calling LLM APIs directly, returning raw responses without post-processing. Alternatively, agents might be returning data in a new format that doesn't match the extraction logic.
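A possible shape for the fix on the orchestrator side is sketched below; `_wrap_result`, `raw_api_response`, and `metrics` are illustrative names, not the actual distributed_orchestrator.py code:

```python
def _wrap_result(raw_api_response: dict, metrics: dict) -> dict:
    """Illustrative helper: pull the assistant content out of a raw API
    response and return it in the shape main.py expects."""
    content = ''
    if isinstance(raw_api_response.get('message'), dict):            # Ollama-style
        content = raw_api_response['message'].get('content', '')
    elif raw_api_response.get('choices'):                            # OpenAI-style
        content = raw_api_response['choices'][0].get('message', {}).get('content', '')

    return {
        'result': {
            'final_output': content,
            'metadata': {'raw_keys': list(raw_api_response.keys())},
        },
        'metrics': metrics,
    }
```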
Next steps:
- Run a test query and check the logs to see which path executes
- Check the log output for the DEBUG messages showing the `result['result']` keys
- Identify where `result['result']` comes from in distributed_orchestrator.py
- Fix the orchestrator to properly extract/wrap content before returning (a standalone check of the extraction logic is sketched below)
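For reference, a quick standalone check of the extraction branches against hand-built sample payloads; `extract_content` is an illustrative name, not a function that exists in main.py:

```python
def extract_content(result_data: dict) -> str:
    """Mirror of the extraction logic added to main.py, for offline testing."""
    if isinstance(result_data.get('message'), dict):                    # Ollama-style
        return result_data['message'].get('content', '')
    choices = result_data.get('choices')
    if isinstance(choices, list) and choices and 'message' in choices[0]:  # OpenAI-style
        return choices[0]['message'].get('content', '')
    return ''

ollama_sample = {'message': {'role': 'assistant', 'content': 'String theory...'}}
openai_sample = {'choices': [{'index': 0, 'finish_reason': 'length',
                              'message': {'role': 'assistant', 'content': 'String theory...'}}]}

assert extract_content(ollama_sample) == 'String theory...'
assert extract_content(openai_sample) == 'String theory...'
assert extract_content({'unexpected': 'shape'}) == ''   # would trigger the JSON fallback
```

To reproduce the issue end to end, run an interactive distributed session: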
```
cd /home/joker/SynapticLlamas
python main.py --interactive --distributed

# Run a query and check logs:
SynapticLlamas> Explain string theory

# Look for these log lines:
# DEBUG: result['result'] keys: [...]
# WARNING: No markdown content found...
# OR
# ✅ Extracted content from OpenAI format
```

/home/joker/SynapticLlamas/main.py (lines 1359-1392):
- Added debug logging before extraction
- Added info logging for successful extraction
- Added warning logging for fallback case
🔍 DEBUGGING - Logging added, waiting for next test run to diagnose root cause
The debug logs will tell us exactly why the extraction is failing and what structure `result['result']` actually has.