LLM sometimes generates incorrect Markdown strings, need to improve parse_args logic #81

@KaminariOS

Description

Describe the bug
The LLM (I am using GPT-4-mini via OpenRouter) sometimes generates commands like this:

    {
        "role": "assistant",
        "content": "``exec_shell(\"kubectl get pods -n test-social-network\")```"
    },
    {
        "role": "env",
        "content": "Error parsing response: No API call found!"
    },

Notice that there are only two backticks before exec_shell. The model repeats the same mistake for the entire run.

The current logic for extracting the code block:

    def extract_codeblock(self, response: str) -> str:
        """Extract a markdown code block from a string.

        Args:
            response (str): The response string.

        Returns:
            str: The extracted code block.
        """
        outputlines = response.split("\n")
        indexlines = [i for i, line in enumerate(outputlines) if "```" in line]
        if len(indexlines) < 2:
            return ""
        return "\n".join(outputlines[indexlines[0] + 1 : indexlines[1]])

For the response above, the function returns an empty string, which produces the "No API call found!" error.
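For illustration, here is a standalone snippet that mirrors the check above (not the project code itself) and shows why nothing is extracted:

    # Mirrors extract_codeblock's fence detection on the failing response.
    response = '``exec_shell("kubectl get pods -n test-social-network")```'
    lines = response.split("\n")
    fence_lines = [i for i, line in enumerate(lines) if "```" in line]
    print(fence_lines)           # [0] -- only the trailing ``` is detected
    print(len(fence_lines) < 2)  # True, so extract_codeblock returns ""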

The malformed fence does not appear in every run.

Instead of relying on additional prompting, it would be more robust to handle this on the parser side.
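One possible direction is sketched below. The helper name and the exact heuristics are my own suggestion, not the project's API: treat any run of two or more backticks as a fence, so slightly malformed output such as the ``exec_shell(...)``` example above still parses.

    import re

    def extract_codeblock_lenient(response: str) -> str:
        """Extract a code block, tolerating malformed fences such as ``...```."""
        # Accept any run of two or more backticks as a fence marker.
        fences = list(re.finditer(r"`{2,}", response))
        if len(fences) < 2:
            return ""
        # Take everything between the first and last fence.
        block = response[fences[0].end() : fences[-1].start()]
        # Drop an optional language tag (e.g. ```python) on the first line.
        first_newline = block.find("\n")
        if first_newline != -1 and re.fullmatch(
            r"[A-Za-z0-9_+-]*", block[:first_newline].strip()
        ):
            block = block[first_newline + 1 :]
        return block.strip()

With the failing response from the log above, this sketch returns exec_shell("kubectl get pods -n test-social-network") instead of an empty string, while well-formed ```...``` blocks are still handled as before.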
