Stealthy Prompt Injection and Poisoning in RAG Systems via Vector Database Embeddings
-
Updated
Nov 24, 2025 - Python
Stealthy Prompt Injection and Poisoning in RAG Systems via Vector Database Embeddings
A CDN side advanced LLM poisoning technique has been applied to this otherwise innocuous project generated by Spark. Assist us in identifying the precise means by which the reasoning chain had been intercepted and/or evidence of token or string injection into the LLM's final outputs.
A CDN side advanced LLM poisoning technique was applied to this otherwise innocuous project generated by Spark. Assist us in identifying the precise means by which the reasoning chain had been intercepted and/or evidence of token or string injection into the LLM's final outputs.
Add a description, image, and links to the prompt-poisoning topic page so that developers can more easily learn about it.
To associate your repository with the prompt-poisoning topic, visit your repo's landing page and select "manage topics."