
The 2025 State of Application Risk Report: Understanding AI Risk in Software Development

Get details on the AI risks Legit unearthed in enterprises' software factories.

Artificial intelligence has rapidly become a double-edged sword in application security. While AI tools offer developers unprecedented efficiency in code generation, they simultaneously introduce new vulnerabilities. The 2025 State of Application Risk report, based on data from the Legit Application Security Posture Management platform, sheds light on the types of AI-related risks in modern software development environments.

The AI visibility gap 

Much of the AI risk in software development stems from a visibility gap. Security teams often don’t know where AI is in use in the first place, and when they do find it, it is frequently in a location that isn’t configured securely.

Legit recently conducted a survey of 400 security professionals and software developers to understand the use and security of GenAI in software development. Ninety-eight percent of the survey respondents reported that security teams need a better handle on how GenAI-based solutions are being used in development.

 


 

AI toxic combinations

Research for the report found that a significant 71 percent of organizations are now using AI models in their source code development processes. At the same time, 46 percent of these organizations are employing AI models in risky ways.

 

[Figure 5 from the report: AI model usage in development]

 

In many cases, we found that enterprises are combining the use of these models with other risks, amplifying the exposure. For example, we often find developers using AI to generate code in a repository that doesn’t have a code review step. This could, for instance, allow code with restrictive licenses to enter the product, exposing the organization to legal or copyright issues.

One example: The report reveals that, on average, 17 percent of repositories within organizations have developers using AI tools without proper branch protection or code review processes in place.

 

[Figure 7 from the report: AI usage without branch protection]

 

This toxic combination of AI usage and lax security controls creates an environment ripe for introducing vulnerabilities or malicious code into production systems.
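
To make this concrete, below is a minimal sketch of how a team might flag that toxic combination with the GitHub REST API, assuming it already maintains a list of repositories where AI coding assistants are known to be in use. The organization name, repository list, and token handling are illustrative placeholders, not part of the report.

```python
# Minimal sketch (not from the report): flag repositories where AI-assisted
# development is known to occur but the default branch has no protection rule.
# ORG, AI_ASSISTED_REPOS, and the token variable are illustrative placeholders.
import os

import requests

ORG = "my-org"                                            # hypothetical org name
AI_ASSISTED_REPOS = ["payments-service", "web-frontend"]  # repos known to use AI tools
TOKEN = os.environ["GITHUB_TOKEN"]                        # reading protection rules typically requires repo admin

headers = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

for repo in AI_ASSISTED_REPOS:
    # Look up the default branch, then check whether it has a protection rule.
    repo_resp = requests.get(f"https://api.github.com/repos/{ORG}/{repo}", headers=headers)
    repo_resp.raise_for_status()
    default_branch = repo_resp.json()["default_branch"]

    protection = requests.get(
        f"https://api.github.com/repos/{ORG}/{repo}/branches/{default_branch}/protection",
        headers=headers,
    )
    if protection.status_code == 404:
        # GitHub returns 404 when no protection rule exists on the branch.
        print(f"Toxic combination: {repo} uses AI tools but '{default_branch}' is unprotected")
```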

Another concerning trend we see is the use of low-reputation large language models (LLMs). These models may contain malicious code or hidden payloads, or may exfiltrate sensitive data sent to them.

The reputation and level of community adoption of a third-party AI model are critical indicators of its reliability, effectiveness, and safety. Models with few downloads or likes, or with limited activity on their repositories, may signal anything from a lack of effectiveness to potential security risks. This vetting is especially important in environments that handle sensitive data or require high reliability. Recent examples from the open-source ecosystem show that poorly maintained projects don’t address security issues promptly and are prone to attack.
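
As a rough illustration, the sketch below checks a model’s download and like counts on Hugging Face with the huggingface_hub client before the model is adopted. The thresholds and model ID are hypothetical, not recommendations from the report.

```python
# Minimal sketch (not from the report): a rough reputation check for a third-party
# model on Hugging Face, using downloads and likes as proxies for community adoption.
# The thresholds and model ID below are hypothetical.
from huggingface_hub import model_info

MIN_DOWNLOADS = 10_000  # illustrative threshold
MIN_LIKES = 50          # illustrative threshold

def flag_low_reputation(model_id: str) -> bool:
    """Return True if the model looks under-adopted and deserves a manual review."""
    info = model_info(model_id)
    downloads = info.downloads or 0
    likes = info.likes or 0
    low = downloads < MIN_DOWNLOADS or likes < MIN_LIKES
    status = "LOW REPUTATION - review before use" if low else "OK"
    print(f"{model_id}: {downloads} downloads, {likes} likes -> {status}")
    return low

flag_low_reputation("example-org/example-model")  # placeholder model ID
```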

 

Mitigating AI risk

To mitigate these emerging AI-related risks, Legit recommends several best practices:

  1. Conduct threat modeling specifically focused on AI-related threats to understand potential impacts.

  2. Prioritize security considerations when selecting AI models for use in development processes. Thoroughly evaluate the reputation of the models, the users endorsing them, and the organizations producing them. Either stop using low-reputation models and replace them with popular alternatives, or conduct a comprehensive analysis to make sure they are not malicious. 

  3. Implement tools and processes to gain comprehensive visibility into AI usage across the entire development environment (a minimal scanning sketch follows this list). Which applications developed in my organization are employing GenAI? Which models are we downloading from community marketplaces like Hugging Face? In which repositories does this happen, and which of them are business-critical? Only after gaining complete visibility into your AI development, including the use of third-party models, will you be able to tackle the risks that revolve around them.

  4. Create and maintain clear policies governing the use of AI in development, including guidelines for selecting appropriate models and implementing necessary security controls.
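
The sketch below illustrates one small piece of the visibility called for in step 3: scanning a checked-out repository for from_pretrained(...) calls to inventory which Hugging Face models it loads. The path, regex, and helper name are illustrative; a full inventory would cover every repository and asset type in the development environment.

```python
# Minimal sketch (illustrative only): inventory which Hugging Face models a
# checked-out repository loads by scanning Python sources for from_pretrained(...)
# calls. The path and regex are placeholders, not a complete detection strategy.
import re
from pathlib import Path

PATTERN = re.compile(r"""from_pretrained\(\s*["']([^"']+)["']""")

def scan_repo(repo_path: str) -> dict[str, list[str]]:
    """Map each referenced model ID to the files that load it."""
    findings: dict[str, list[str]] = {}
    for py_file in Path(repo_path).rglob("*.py"):
        text = py_file.read_text(errors="ignore")
        for model_id in PATTERN.findall(text):
            findings.setdefault(model_id, []).append(str(py_file))
    return findings

if __name__ == "__main__":
    for model_id, files in scan_repo(".").items():
        print(f"{model_id}: referenced in {len(files)} file(s)")
```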

Learn more

To get more details and analysis on the 2025 software risk landscape, download the 2025 State of Application Risk report.

 

 

 

 


Published on
May 09, 2025
