"20000 balls inside a heptagon" or a "native" tool calling and automatic feedback during inference #1253
magikRUKKOLA started this conversation in Ideas
output.webm
Note: the video above shows a result from DeepSeek-V3-0324 with native tool calling.
I had been watching YouTubers come up with various complex prompts to make an LLM produce code, which they then copy/paste, run, and test; if there is an error, they paste it back to induce the LLM to fix the code, and so on.
It seems to me that this process can be automated with quite promising results. For example, consider a prompt that induces the LLM to use an execute_python function and receive feedback from it during chain-of-thought/inference. For this to work, the code has to be executed during the client's request and fed back to the LLM so it can fix bugs, adjust, etc.
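The feedback loop described above can be sketched in Python. This is a minimal illustration, not the author's actual Lua implementation: `chat` stands in for any OpenAI-compatible completion call, and the `execute_python`/`tool_call_loop` names are mine. The key idea is that execution output, including tracebacks, is appended as a tool message so the model can self-correct on the next round.

```python
import json
import subprocess
import sys

def execute_python(code: str, timeout: int = 10) -> str:
    """Run a Python snippet in a subprocess; return combined stdout/stderr."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.stdout + proc.stderr

def tool_call_loop(chat, user_prompt: str, max_rounds: int = 8) -> str:
    """Feed execution results back to the model until it stops calling tools.

    `chat(messages)` is any OpenAI-compatible completion function returning
    an assistant message dict, optionally carrying a "tool_calls" list.
    """
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_rounds):
        reply = chat(messages)
        messages.append(reply)
        calls = reply.get("tool_calls") or []
        if not calls:
            # No tool call: the model considers itself done.
            return reply.get("content", "")
        for call in calls:
            args = json.loads(call["function"]["arguments"])
            result = execute_python(args["code"])
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                # Errors go back to the model too, so it can fix the code.
                "content": result,
            })
    return ""
```

Because the proxy sits between the client and the backend, the client never sees the intermediate tool rounds; it only receives the final answer.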
I have this implemented in Lua with OpenResty modules in about 1000 lines. Basically, it's a reverse proxy that supports multiple OpenAI/Ollama-compatible backends, function calls of arbitrary Python code, and automatic injection of additional tools. That means it works with any client that supports Ollama (Chatbox AI, charmbracelet mods, etc.). Sandboxing is implemented via Firejail. Only Linux is supported.
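To illustrate the Firejail sandboxing, here is a hedged Python sketch of how the execution step might be wrapped. The exact flags the author's proxy uses are not stated; `--net=none` (no network) and `--private` (throwaway home directory) are common restrictive choices, and this sketch falls back to an unsandboxed subprocess when `firejail` is not on PATH.

```python
import shutil
import subprocess
import sys

def sandboxed_python(code: str, timeout: int = 10) -> str:
    """Execute untrusted Python under Firejail when available.

    --net=none cuts off network access; --private mounts a temporary home.
    Without firejail on PATH, fall back to a bare subprocess (no sandbox,
    for testing only).
    """
    if shutil.which("firejail"):
        cmd = ["firejail", "--quiet", "--net=none", "--private",
               sys.executable, "-c", code]
    else:
        cmd = [sys.executable, "-c", code]  # unsandboxed fallback
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    return proc.stdout + proc.stderr
```

Returning stderr alongside stdout matters here: a traceback is exactly the feedback the model needs to repair its own code.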
Let me know if anyone has any interest. I might create a distro.