Tag: generative AI security
-
Poisoned at Birth: The Hidden Dangers of Data Poisoning in Generative AI
Introduction: When the Seed Is Tainted In the world of generative AI, we often focus on runtime threats – prompt injection, model leaks, hallucinations. But what if the problem began before the model ever answered a question? When training or fine-tuning data is manipulated, the model is “poisoned at birth.” That means the flaw is…