Description
A common practice in prompting is to ask the LLM to return what you want between XML tags.
For example, if you ask it to return the Laravel version of a project inside tags, you'd get something like <laravel_version>12.1</laravel_version>.
In this case it would be super easy to extract only the Laravel version from the response.
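For illustration, a minimal extraction sketch in plain PHP (the `extractTag` helper and the tag name are just made up for this example):

```php
<?php

// Hypothetical helper: pull the value out of <laravel_version>...</laravel_version>
// (or any other tag) from whatever text the model returned.
function extractTag(string $output, string $tag): ?string
{
    $pattern = sprintf('/<%1$s>(.*?)<\/%1$s>/s', preg_quote($tag, '/'));

    return preg_match($pattern, $output, $matches) ? trim($matches[1]) : null;
}

$version = extractTag($modelOutput, 'laravel_version'); // "12.1"
```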
But an issue arises when a model starts to think in steps.
The moment a model (looking at you, Gemini) outputs <laravel_version>12.1</laravel_version> in a step OTHER than the last one, we can't rely on the ->text attribute of the Response class.
The openai-php/client package solves this by providing an ->outputText method which combines all the output of the model.
Not sure what the best approach would be for Prism here, but any kind of "just give me all the output as a string so I can extract something specific myself" method would be nice.
Having to glue all the steps together every time is quite a chore.
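For reference, this is roughly what the workaround looks like today — a sketch that assumes the Response exposes its steps as an iterable whose items carry a text property, and reuses the hypothetical `extractTag` helper from above:

```php
<?php

// Workaround: manually stitch every step's text together before extracting,
// since ->text only reflects the final step.
// (Assumes $response->steps is iterable and each step has a ->text property.)
$fullOutput = collect($response->steps)
    ->map(fn ($step) => $step->text)
    ->implode("\n");

$version = extractTag($fullOutput, 'laravel_version');
```

A built-in equivalent of this (something like openai-php/client's outputText) would remove the need for that boilerplate entirely.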