Any reason why chunking with the llama vision model results in gibberish in the chunks? #10155
Replies: 2 comments 1 reply
-
There are a few likely reasons why chunking with the llama vision model in RAGFlow v0.19.1 might start producing gibberish after initially working.
If the gibberish output started suddenly, double-check whether the image format or chunking parameters changed, or whether system resources are being maxed out. If possible, try removing base64 headers, adjusting chunking parameters, or testing with a different chunking method (like "Paper" instead of "General") to see if the output improves.
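The base64-header suggestion above can be sketched as follows. This is a minimal, illustrative Python snippet, not RAGFlow's actual code; the helper name and the sample data-URI string are assumptions. The idea is that a vision model served via Ollama expects the raw base64 payload, so a leading `data:image/...;base64,` header must be stripped before decoding or sending the image.

```python
import base64

def strip_base64_header(data: str) -> bytes:
    """Drop a data-URI prefix such as 'data:image/png;base64,' if present,
    then decode the remaining base64 payload to raw image bytes.

    Hypothetical helper for illustration only; not RAGFlow's actual code.
    """
    if data.startswith("data:") and "," in data:
        data = data.split(",", 1)[1]
    return base64.b64decode(data)

# Passing the full data-URI string (header included) to the model is one
# plausible cause of garbage output, since the header characters corrupt
# the decoded image. Sample value below is an illustrative fragment only.
encoded = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUg=="
image_bytes = strip_base64_header(encoded)
```

The same helper also passes plain base64 strings through unchanged, so it is safe to apply to every image before handing it to the img2txt step.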
-
How about changing to another model?
-
This is version v0.19.1, which was the only version that worked for Ollama integration. When I was using Ollama llama3.2-vision:11b as img2txt, it worked for the first few images, then it started producing gibberish. Any idea why this happens?