**Problem**

OpenEvolve appears to assume an OpenAI-compatible Chat Completions endpoint (`POST /v1/chat/completions`) and/or a "chat-style" client wrapper. This works with OpenAI-compatible servers, but not with LLM providers whose APIs only support the OpenAI Responses API (`POST /v1/responses`) and do not implement `/v1/chat/completions`. As a result, OpenEvolve cannot connect to these providers without running a separate compatibility proxy.
**Requested change**

Add first-class support for the Responses API, e.g. via a config switch like:

- `llm.api_type: chat_completions | responses`, or
- `llm.endpoint: /v1/chat/completions | /v1/responses`

If `responses` is selected:

- send requests to `POST {api_base}/v1/responses`
- map the current chat payload into the Responses format (see the sketch below)
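For illustration, here is a minimal sketch of what that mapping could look like. This is not OpenEvolve code; the function names are hypothetical, and the field handling (`instructions`, `input`, `max_output_tokens`, `output_text` parsing) reflects my understanding of the Responses API request/response shape rather than a verified spec.

```python
# Hypothetical sketch: translate a Chat Completions-style payload into a
# Responses API request and extract the generated text.
import requests


def chat_to_responses_payload(chat_payload: dict) -> dict:
    """Map a Chat Completions-style payload onto a Responses API payload."""
    messages = chat_payload.get("messages", [])
    # System messages map to the top-level `instructions` field; the rest
    # become `input` items and keep their original roles.
    instructions = "\n".join(
        m["content"] for m in messages if m["role"] == "system"
    )
    input_items = [
        {"role": m["role"], "content": m["content"]}
        for m in messages
        if m["role"] != "system"
    ]
    payload = {
        "model": chat_payload["model"],
        "input": input_items,
    }
    if instructions:
        payload["instructions"] = instructions
    # `max_tokens` is named `max_output_tokens` in the Responses API.
    if "max_tokens" in chat_payload:
        payload["max_output_tokens"] = chat_payload["max_tokens"]
    for key in ("temperature", "top_p"):
        if key in chat_payload:
            payload[key] = chat_payload[key]
    return payload


def call_responses_api(api_base: str, api_key: str, chat_payload: dict) -> str:
    """POST to {api_base}/v1/responses and return the concatenated output text."""
    resp = requests.post(
        f"{api_base}/v1/responses",
        headers={"Authorization": f"Bearer {api_key}"},
        json=chat_to_responses_payload(chat_payload),
        timeout=120,
    )
    resp.raise_for_status()
    data = resp.json()
    # Collect `output_text` parts from `message` items in the `output` list.
    return "".join(
        part["text"]
        for item in data.get("output", [])
        if item.get("type") == "message"
        for part in item.get("content", [])
        if part.get("type") == "output_text"
    )
```

With something like this behind the proposed `llm.api_type` switch, the rest of OpenEvolve could keep building chat-style payloads unchanged.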