I'd hate to be that guy... but an LLM might be useful #4019
Replies: 5 comments 1 reply
It's OK, I am usually that guy.
I'm fine with increasing scope if I think it's directionally the right choice. I still need to understand what you mean, though.
Can you give me an example? Maybe there is a log error like "something happened in code on line XX", and Dozzle has a button that automatically copies the error, searches it using your choice of LLM, and spits out possible solutions? Is that what you mean? So the win here is really saving the copy-and-paste to an AI? Or do you mean something completely different, where the whole scope of the logs, image, and container details is sent to an agent for further processing? There are so many options here that I think it would be good to first understand the use case.
I envisioned this:
The error tracking would be helpful in itself as a standalone feature, giving a more focused look at what's going wrong, and the LLM component could be an optional add-on enabled by entering an API key and a base URL for the model to use. I'm not really a developer, so I often do this with logs anyway to learn what's going on and what potential debugging steps might be. I think it would be a neat feature to have this information at a glance, because I'm usually opening Dozzle to debug or find out what's wrong anyway.
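As a rough sketch of what such an add-on hook might look like (all names here are hypothetical, not Dozzle's actual API), the user-supplied base URL and API key could be used to build a request against the OpenAI-compatible chat-completions convention, which most self-hosted gateways also accept:

```python
def make_analysis_request(base_url: str, api_key: str, log_text: str):
    """Build (url, headers, body) for an OpenAI-compatible chat completion call.

    Hypothetical helper: the endpoint path and payload shape follow the
    OpenAI chat-completions convention; nothing here is Dozzle's real code.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        # Placeholder model name; the user's configured model would go here.
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system",
             "content": "You explain container log errors and suggest debugging steps."},
            {"role": "user", "content": log_text},
        ],
    }
    return url, headers, body
```

Because only the base URL varies, the same code would work against a hosted provider or a local server exposing the same endpoint shape.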
I use an LLM to analyze logs. A simple prompt in Google Gemini can sometimes produce very helpful results. For instance, my typical prompt would be: "Do you see anything unusual or alarming in these log entries?" followed by 100-200 pasted log entries. It's a good idea to check for personally identifiable information like user IDs and other things you don't want fed back into the model. I'm not sure I would use an LLM that sent all the log text (uncleaned) to some central server, but a locally running LLM that, say, once per day analyzed that day's logs and reported any suspicious, uncommon, or alarming entries could be useful. How useful depends on the LLM's depth. But I do see the value. FWIW.
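The PII-scrubbing step described above could be sketched like this. The regex patterns are illustrative assumptions, not a complete PII filter; real deployments would tune them to their own log formats:

```python
import re

# Illustrative patterns only -- emails, user IDs, and IPv4 addresses.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),
    (re.compile(r"\buser_?id[=:]\s*\S+", re.IGNORECASE), "user_id=<redacted>"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<ip>"),
]

def scrub(line: str) -> str:
    """Remove obvious PII from a log line before it is sent to an LLM."""
    for pattern, replacement in REDACTIONS:
        line = pattern.sub(replacement, line)
    return line

def build_prompt(log_lines: list[str]) -> str:
    """Wrap scrubbed log entries in the kind of prompt described above."""
    cleaned = "\n".join(scrub(line) for line in log_lines)
    return "Do you see anything unusual or alarming in these log entries?\n\n" + cleaned
```

Running the scrub locally, before any text leaves the machine, matches the commenter's point that uncleaned logs shouldn't go to a central server.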
An example implementation of this is Gemini in Google Cloud, which can access some logs and answer questions about them. That said, this seems like a big feature requiring multiple intertwined and not-so-small changes.
This sounds like the kind of thing that would need a community member to step up and implement it once a design is selected.
I think this would be a valuable addition. I personally haven't found a lot of time to play around with Gemini API.
It might be cool if unique error messages were stored and potential fixes were searched automatically via an LLM + Search so they can be reviewed later.
I understand this changes the scope of the project and would certainly impact the small footprint it has on a server.
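One way to keep the footprint small would be to store only a normalized fingerprint per unique error, so repeated errors that differ only in details (line numbers, durations, filenames) map to one stored entry awaiting review. A minimal sketch, with all names hypothetical:

```python
import hashlib
import re

def fingerprint(message: str) -> str:
    """Collapse variable parts (numbers, hex ids, quoted strings) so that
    repeated errors differing only in details share one fingerprint."""
    normalized = re.sub(r"0x[0-9a-fA-F]+|\d+", "<n>", message)
    normalized = re.sub(r'"[^"]*"', '"<s>"', normalized)
    return hashlib.sha1(normalized.encode()).hexdigest()[:12]

# In-memory store of first-seen messages, keyed by fingerprint.
seen: dict[str, str] = {}

def record(message: str) -> bool:
    """Store the first occurrence of each unique error; return True if new.
    Only new fingerprints would trigger an LLM + search lookup."""
    key = fingerprint(message)
    if key in seen:
        return False
    seen[key] = message
    return True
```

Deduplicating before any LLM call bounds both storage and API usage, which speaks to the footprint concern above.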