
When executing a test scenario using TestZeus Hercules (v0.0.40), the Planner Agent's response is not properly formatted as JSON. This leads to repeated warnings in response_parser.py. #48

Open
@Ashokaruldeva

Description

Issue Summary
When running a test scenario with TestZeus Hercules v0.0.40, the Planner Agent returns a response that is not valid JSON, which triggers repeated warnings in response_parser.py and ultimately fails the test execution.

[Screenshot: repeated JSON parsing warnings from response_parser.py]

🖥 System Specifications
OS: (Windows/Linux/macOS - specify exact version)
RAM: 32 GB
CPU: 24 Cores
Storage: 1 TB SSD
GPU: NVIDIA GeForce RTX 4060
Dedicated Memory: 8 GB
Shared Memory: 16 GB

🔧 Software & Environment Details
Ollama Version: 0.6.0
TestZeus-Hercules Version: 0.0.40
Python Version: 3.11
Pip Version: 22.2.2
🤖 LLM Models Used
firefunction-v2:70b-q3_K_S
Mistral 7B
Mixtral 8x7B
Llama 3.2 70B
Gemma3 27B

Configuration Files - agent_llm_config.json

```json
{
    "ollama": {
        "planner_agent": {
            "model_name": "firefunction-v2:70b-q3_K_S",
            "model_api_key": "",
            "model_api_type": "ollama",
            "model_client_host": "http://localhost:11434",
            "model_native_tool_calls": true,
            "model_hide_tools": "if_any_run",
            "llm_config_params": {
                "cache_seed": null,
                "temperature": 0.0,
                "top_p": 0.001
            }
        },
        "nav_agent": {
            "model_name": "firefunction-v2:70b-q3_K_S",
            "model_api_key": "",
            "model_api_type": "ollama",
            "model_client_host": "http://localhost:11434",
            "model_native_tool_calls": true,
            "model_hide_tools": "if_any_run",
            "llm_config_params": {
                "cache_seed": null,
                "temperature": 0.0,
                "top_p": 0.001
            }
        },
        "mem_agent": {
            "model_name": "firefunction-v2:70b-q3_K_S",
            "model_api_key": "",
            "model_api_type": "ollama",
            "model_client_host": "http://localhost:11434",
            "model_native_tool_calls": true,
            "model_hide_tools": "if_any_run",
            "llm_config_params": {
                "cache_seed": null,
                "temperature": 0.0,
                "top_p": 0.001
            }
        },
        "helper_agent": {
            "model_name": "firefunction-v2:70b-q3_K_S",
            "model_api_key": "",
            "model_api_type": "ollama",
            "model_client_host": "http://localhost:11434",
            "model_native_tool_calls": true,
            "model_hide_tools": "if_any_run",
            "llm_config_params": {
                "cache_seed": null,
                "temperature": 0.0,
                "top_p": 0.001
            }
        }
    }
}
```
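To rule out the config file itself, a quick structural check that it parses as JSON and contains the expected ref key and agent section (a sketch; the literal below is an abbreviated copy of the planner section from this report):

```python
import json

# Abbreviated copy of the planner section from agent_llm_config.json above,
# inlined here only to demonstrate the structural check.
CONFIG = """
{
    "ollama": {
        "planner_agent": {
            "model_name": "firefunction-v2:70b-q3_K_S",
            "model_api_type": "ollama",
            "llm_config_params": {"cache_seed": null, "temperature": 0.0, "top_p": 0.001}
        }
    }
}
"""

config = json.loads(CONFIG)  # raises json.JSONDecodeError if malformed
# The top-level key must match AGENTS_LLM_CONFIG_FILE_REF_KEY ("ollama").
assert "ollama" in config
assert "planner_agent" in config["ollama"]
print("config OK")
```

If this check passes against the real file, the warnings point at the model output rather than the configuration.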

test.feature

```gherkin
Feature: Open Google homepage

  Scenario: User opens Google homepage
    Given I have a web browser open
    When I navigate to "https://www.google.com"
    Then I should see the Google homepage
```

🔹 Steps to Reproduce
Step 1: Set Environment Variables & Run the Test
Run the following commands in the terminal/cmd:

```shell
set AGENTS_LLM_CONFIG_FILE=opt\agent_llm_config\agent_llm_config.json
set AGENTS_LLM_CONFIG_FILE_REF_KEY=ollama
set HEADLESS=false
set RECORD_VIDEO=true
set TAKE_SCREENSHOTS=true
testzeus-hercules --input-file opt\input\test.feature --output-path opt\output --test-data-path opt\test_data --agents-llm-config-file %AGENTS_LLM_CONFIG_FILE% --agents-llm-config-file-ref-key %AGENTS_LLM_CONFIG_FILE_REF_KEY%
```
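One environment pitfall worth ruling out when chaining `set` with `&&` in cmd.exe: any space before the `&&` becomes part of the variable's value, so the ref key can silently become `"ollama "` and fail the config lookup. A small sketch (hypothetical helper, not part of Hercules) showing the effect and a defensive `.strip()`:

```python
import os

# Simulate what cmd.exe stores for `set AGENTS_LLM_CONFIG_FILE_REF_KEY=ollama && ...`:
# the space before `&&` is kept as part of the value.
os.environ["AGENTS_LLM_CONFIG_FILE_REF_KEY"] = "ollama "

def strip_env(name: str, default: str = "") -> str:
    """Read an environment variable with surrounding whitespace removed."""
    return os.environ.get(name, default).strip()

raw = os.environ["AGENTS_LLM_CONFIG_FILE_REF_KEY"]
print(repr(raw))                                          # 'ollama ' would not match the "ollama" key
print(repr(strip_env("AGENTS_LLM_CONFIG_FILE_REF_KEY")))  # 'ollama'
```

Using `set "VAR=value"` (quoted) or putting each `set` on its own line avoids the trailing-space problem entirely.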

❌ Actual Behavior:

The Planner Agent's response is not valid JSON, leading to multiple warnings in response_parser.py.
Test execution fails due to improperly formatted LLM responses.
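A common cause of this failure mode is the model wrapping its JSON in markdown fences or surrounding prose instead of replying with pure JSON. A hypothetical sketch (not Hercules' actual response_parser.py) of the kind of tolerant extraction that would accept such replies:

```python
import json
import re

# Matches a markdown code fence (optionally tagged "json") and captures its body.
FENCED = re.compile(r"`{3}(?:json)?\s*(.*?)`{3}", re.DOTALL)

def parse_llm_json(text: str):
    """Best-effort extraction of a JSON object from an LLM reply."""
    # 1. The happy path: the reply is already pure JSON.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # 2. JSON wrapped in a markdown code fence.
    match = FENCED.search(text)
    if match:
        try:
            return json.loads(match.group(1))
        except json.JSONDecodeError:
            pass
    # 3. Last resort: the outermost {...} span.
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(text[start:end + 1])
        except json.JSONDecodeError:
            pass
    return None

fence = "`" * 3  # markdown fence token, built indirectly
reply = f"Here is the plan:\n{fence}json\n" + '{"plan": ["open browser"]}\n' + fence
print(parse_llm_json(reply))  # → {'plan': ['open browser']}
```

Smaller quantized models (such as the q3_K_S build used here) are especially prone to decorating their JSON this way, so lowering the quantization level or trying a model with stronger structured-output support may also help.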

Metadata

Assignees: No one assigned
Labels: bug (Something isn't working), help wanted (Extra attention is needed)
