Author: khirawdhi
-

AI Red Teaming: Breaking Your Models Before Attackers Do
How to stress-test, find, and fix the real vulnerabilities in your AI systems before someone else does. TL;DR: AI red teaming is an adversarial, multidisciplinary practice that probes production and pre-production models to surface security, safety, privacy, and misuse risks. It borrows from cyber red teams but expands to data, model artifacts, pre-trained components, prompt…
-

From DevSecOps to MLSecOps: Securing the AI Development Lifecycle
In recent years, organisations have matured their software-development practices through models like DevSecOps, which integrates security (“Sec”) into the development (“Dev”) and operations (“Ops”) lifecycle. Now, as artificial intelligence (AI) and machine-learning (ML) systems become core to business operations, a new discipline is emerging: MLSecOps (Machine Learning Security Operations). MLSecOps takes the DevSecOps ethos but extends…
-

Securing AI Plugins and Toolchains: Defense Beyond the Model
Introduction: The Model Isn’t the Only Attack Surface When we talk about securing generative AI, we often focus on the model itself: its weights, its training data, its prompt vulnerabilities. But in modern systems, the model is just one piece. Many solutions chain the model with plugins, APIs, orchestration layers, agent tools, and external services.…
-

Poisoned at Birth: The Hidden Dangers of Data Poisoning in Generative AI
Introduction: When the Seed Is Tainted In the world of generative AI, we often focus on runtime threats – prompt injection, model leaks, hallucinations. But what if the problem began before the model ever answered a question? When training or fine-tuning data is manipulated, the model is “poisoned at birth.” That means the flaw is baked…
-

Threat Modeling for Generative AI: A Practical, End-to-End Playbook
Generative AI changes how systems are attacked and defended. This hands-on playbook shows you how to threat-model GenAI products, covering data pipelines, prompts, agents, plugins, and safety layers. You’ll get a step-by-step method, threat catalogs, sample scenarios, and concrete mitigations you can implement today without killing developer velocity. Why threat modeling for GenAI is different…