Get details on how AI is introducing new risk to software.
The bottom line is that AI means more code, a lot more code. And more code that is not at the quality level of human-generated code. Why? For one, because it’s trained on code from across the Internet, not all of it high-quality or secure. In addition, AI generators could, and have, created risks like data exposure, supply chain security issues, or the introduction of outdated, vulnerable, or malicious libraries.
At the same time, while the number of lines of code has and is exploding, the level of human capacity for oversight and review has stayed the same. So we’ve got more code, with more problems, and less oversight. In addition, the nature of AI-generated code itself introduces risk. The problem isn’t just that AI is “new.” It’s that AI models — and the ecosystems they live in — behave fundamentally differently than traditional code and services.
Risks AI brings to software
A few key reasons AI introduces fresh risk:
Opaque behavior
Unlike traditional software, AI outputs aren’t always predictable. Attackers can manipulate inputs (prompt injection) or outputs (data poisoning) in ways traditional security models can’t catch easily.
Expanded supply chains
Organizations now consume pre-trained models, APIs, and AI services from third parties, without always validating their provenance, security practices, or hidden risks.
Secret leakage and IP exposure
AI systems trained on internal or external datasets can inadvertently leak sensitive data, secrets, or intellectual property, either through direct queries or indirect model behaviors.
Shadow AI and rogue deployments
Teams are integrating AI models and APIs outside of centralized governance. If you thought shadow IT was bad, wait until you see shadow AI.
And AI’s attack surface isn’t just theoretical anymore. We’re seeing early examples of model manipulation attacks, prompt injections into AI-driven apps, poisoned training data, and malicious model updates starting to crop up, and it’s only going to get worse.
Gartner recently stated that, “Through 2029, over 50% of successful cybersecurity attacks against AI agents will exploit access control issues, using direct or indirect prompt injection as an attack vector.” (Gartner, “Hype Cycle for Application Security, 2025,” July 22, 2025)
On “vibe coding,” in which the AI prompts are 100% natural language, Gartner recommends companies “limit it to a controlled, safe sandbox for execution. Do not use vibe-coded software in your production efforts until the tools mature further.” Further, Gartner said:
“The risks are substantial if developers dive in unprepared or use these tools independently.” (Gartner, “Why Vibe Coding Needs to be Taken Seriously,” May 20, 2025)
Gartner also notes that “By 2027, at least 30% of application security exposures will result from usage of vibe coding practices.” (Gartner, “Hype Cycle for Application Security, 2025,” July 22, 2025)
And the risk is not just in the code, the tools and services used in AI code generation bring risk into your environment as well.
Risks of coding assistants
Vulnerabilities have been discovered in coding assistants themselves — for example, in early 2025, the Legit Security research team found a significant vulnerability in coding assistant GitLab Duo. The AI assistant integrated into GitLab and powered by Anthropic’s Claude contained a remote prompt injection vulnerability that could allow attackers to steal source code from private projects, manipulate code suggestions shown to other users, and even exfiltrate confidential, undisclosed zero-day vulnerabilities — all through GitLab Duo Chat.
In addition, the code AI coding assistants produce and integrate into the application may also be susceptible to vulnerabilities. Specifically, vulnerabilities often come from insecure suggested code being copy/pasted without human validation. In fact, Cursor has a “Yolo” mode that allows AI to run commands without asking for approval.
Risks of LLM agents for coding workflows
LLM agents introduce unique security risks, especially due to their autonomous and multi-step behavior. Specifically, chained autonomous behaviors can escalate risk — such as using external tools, downloading libraries, running commands, or self-modifying code.
In 2023, researchers reported Auto-GPT, an autonomous AI agent, could be manipulated through indirect prompt injection to execute arbitrary code. In one instance, when tasked with summarizing content from an attacker-controlled website, Auto-GPT was tricked into executing malicious commands.
Learn more
AI brings incredible opportunity to software development, but it’s important to understand the risks.
Learn more about how AI is affecting software development and steps you can take to lower your risk in our new guide, AppSec in the Age of AI.