
Prepending the llm.py BASE_PROMPT to the user prompt breaks caching. #134847

Open
@jftkcs

Description


The problem

The llm.py#58 BASE_PROMPT contains the dynamic date and time and is prepended to the OpenAI Conversation prompt (conversation.py#193). Because the date/time changes on every request, the prompt prefix is never identical between requests, which prevents any prompt caching. This increases API costs and adds response latency.

Same issue as #133687.

In my case I forked the integration, removed the BASE_PROMPT, and manually put the date/time at the end of the user-provided prompt template, roughly as sketched below.
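A minimal, hypothetical sketch of that workaround (the names `STATIC_INSTRUCTIONS` and `build_system_prompt` are illustrative, not the integration's actual code): keep the static instructions as a stable prefix and append the dynamic date/time at the very end, so OpenAI's prefix-based prompt caching can still hit on the unchanged part.

```python
from datetime import datetime

# Hypothetical static instructions; in the real integration this would be the
# user-configured prompt template plus any exposed-entity/tool descriptions.
STATIC_INSTRUCTIONS = (
    "You are a voice assistant for Home Assistant.\n"
    "Answer questions about the smart home truthfully and concisely.\n"
)


def build_system_prompt() -> str:
    """Build the system prompt with all dynamic content at the very end.

    OpenAI prompt caching matches on the leading tokens of a request, so
    keeping the changing date/time out of the prefix lets the static part
    be reused across requests.
    """
    now = datetime.now().strftime("%Y-%m-%d %H:%M")
    # Static prefix first (cacheable), dynamic suffix last (changes per call).
    return f"{STATIC_INSTRUCTIONS}\nCurrent date and time: {now}"
```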

Realistically, I think the llm.py BASE_PROMPT is a footgun for LLM integration developers and shouldn't exist as-is.

What version of Home Assistant Core has the issue?

core-2024.10.4

What was the last working version of Home Assistant Core?

No response

What type of installation are you running?

Home Assistant OS

Integration causing the issue

openai_conversation

Link to integration documentation on our website

https://www.home-assistant.io/integrations/openai_conversation

Diagnostics information

No response

Example YAML snippet

No response

Anything in the logs that might be useful for us?

No response

Additional information

No response
