No valid diffs found in response and programs not evolving #359
Replies: 6 comments
-
You need a model capable of generating diffs and following instructions well. You can try setting diff_based_evolution to false to let the model generate the whole file instead. If the model you are using generates diffs in a different format, you can update the prompt in https://github.com/codelion/openevolve/blob/main/openevolve/prompts/defaults/diff_user.txt and try it with that.
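For anyone following along, here is a minimal sketch of that change in the example's config.yaml (the diff_based_evolution key is the one mentioned above; its exact placement in the file is an assumption, so check your own config):

```yaml
# config.yaml (sketch): let the model rewrite the whole program
# instead of emitting diffs. Top-level placement is assumed.
diff_based_evolution: false
```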
-
Thank you for your reply. Changing diff_based_evolution to false has already improved the situation. I will take some more time for prompt engineering and LLM selection.
-
Can you suggest models that support diff generation?
-
Most frontier models work well: Gemini Pro, Claude Sonnet, GPT-5. This usually happens with cheaper/less capable models. You can still try to strengthen the prompt to force the model to generate a valid diff. For open models, try a coder variant such as qwen-coder instead.
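One way to strengthen the prompt is to copy diff_user.txt, append an explicit format reminder, and point the config at your copy. The SEARCH/REPLACE layout below reflects the default template as I understand it; the exact wording is only a suggestion:

```
IMPORTANT: Reply ONLY with one or more blocks in exactly this format,
with no commentary before or after:
<<<<<<< SEARCH
<exact lines copied from the current program>
=======
<your replacement lines>
>>>>>>> REPLACE
```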
-
I tested with a strong model, Qwen3 Coder 480B A35B Instruct, and I still face this issue. I think the prompting might need to be improved.
-
The default prompt template is in https://github.com/algorithmicsuperintelligence/openevolve/blob/main/openevolve/prompts/defaults/diff_user.txt. You can use a custom one by defining it in a text file and adding it to the config. There is an example of how to use custom templates in the prompt optimization config: https://github.com/algorithmicsuperintelligence/openevolve/blob/main/examples/llm_prompt_optimization/config.yaml#L25C1-L26C29
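A sketch of what that config entry can look like (the prompt/template_dir keys mirror the linked example as I read it; treat the exact names as an assumption and compare against that file):

```yaml
# config.yaml (sketch): load prompt templates, including a customized
# diff_user.txt, from a local directory. Key names assumed from the
# linked example config.
prompt:
  template_dir: "templates"
```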
-
Hi!
I am trying to run your function_minimization example to see whether everything is working with my local models.
I slightly changed the example's config.yaml and configured it to use a local Ollama model (currently Codellama:34b).
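Roughly, the change looks like this (a sketch: the llm section's field names are assumptions based on the example configs, though the endpoint is Ollama's standard OpenAI-compatible API):

```yaml
# config.yaml (sketch): route requests to a local Ollama server.
# Field names are assumed; the URL is Ollama's OpenAI-compatible API.
llm:
  api_base: "http://localhost:11434/v1"
  models:
    - name: "codellama:34b"
      weight: 1.0
```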
I keep getting the error "No valid diffs found in response". On closer inspection, the LLM simply doesn't answer in the required format in those instances. In the iterations that don't produce the error, evolution seems to be working, but when I look at the code in the Visualizer there is no diff between the programs, so in the end my best program is still my initial program. Looking at the prompt in the Visualizer, I can see that the LLM suggested changes, but they were not applied. The evaluated metrics between programs do change, however.
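From the default diff_user.txt, a valid response apparently needs one or more SEARCH/REPLACE blocks along these lines (the Python inside is purely illustrative):

```
<<<<<<< SEARCH
def objective(x):
    return x ** 2
=======
def objective(x):
    return (x - 1) ** 2
>>>>>>> REPLACE
```

If the model replies with prose or a unified diff instead, no blocks are found and the "No valid diffs found in response" error is reported.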
I might be looking at the wrong things in the Visualizer, as I only see a "code" tab per program and no "diff" view.
Any help is greatly appreciated!