Feature request: Add a configuration option in the UI and an environment variable for setting the LLM inference timeout. This would let us keep using agentzero with larger models on slower hardware, where responses can take longer than the current timeout allows. Thanks!
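
For illustration, a minimal sketch of how the environment-variable side might look. The variable name `LLM_INFERENCE_TIMEOUT` and the default value are hypothetical suggestions, not anything agentzero exposes today:

```python
import os

# Hypothetical default; the actual value would be up to the maintainers.
DEFAULT_TIMEOUT_SECONDS = 120.0

def get_llm_timeout() -> float:
    """Read the LLM inference timeout (in seconds) from the environment,
    falling back to a default when unset or invalid."""
    raw = os.getenv("LLM_INFERENCE_TIMEOUT", "")
    try:
        return float(raw) if raw else DEFAULT_TIMEOUT_SECONDS
    except ValueError:
        return DEFAULT_TIMEOUT_SECONDS

# The value could then be passed through to the model client call,
# e.g. something along the lines of:
#   client.chat(..., timeout=get_llm_timeout())
```

A UI field could write the same setting, with the environment variable acting as the fallback for headless deployments.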