Prompt Injection Isn't a Chatbot Problem Anymore
The project behind this article is pydefend on GitHub - Apache 2.0, contributions welcome. For a while, prompt injection was mostly embarrassing. You'd get a customer service bot to say something i...

Source: DEV Community
The project behind this article is pydefend on GitHub - Apache 2.0, contributions welcome. For a while, prompt injection was mostly embarrassing. You'd get a customer service bot to say something it shouldn't, or you'd extract the system prompt and post it on Twitter. Real issues, sure, but the consequences were bounded. The bot said a bad thing. Someone screenshotted it. Life went on. That era is ending. The shift isn't a new attack technique. It's a new target. As LLM applications move from "chat interface" to "agent with tools," the threat model changes completely - and most of the security thinking around prompt injection hasn't caught up. What changes when the AI can act Here's the difference in concrete terms. A chatbot that's been successfully injected might leak its system prompt, or produce output that contradicts its guidelines. Annoying. Potentially damaging to trust. But the blast radius is limited to what it says. An agent that's been successfully injected can act. It has