When AI Writes the Code, What Changes for Security?


Last week at RSAC 2026, we hosted four security leaders for a breakfast panel to talk about what’s actually happening inside their organizations.

The conversation went further than the expected “AI is moving fast, security needs to keep up” discussion we’re all familiar with. We were fortunate to have panelists who went beyond the surface level to dismantle assumptions and raise new perspectives: Temi Adebambo, CISO at Microsoft Gaming; James Robinson, CISO at Netskope; Jim Routh, former CISO at MassMutual and CVS Aetna, among other roles; and Sean Poris, Head of Global AppSec at a large consultancy.

Thanks to all four for a genuinely useful hour.


The control point has already moved

The working model for AppSec has always been the developer. Train them, guide them and build controls around their workflows. That model, which assumed a reasonably defined group of people writing code, is gone.

In many organizations, anyone can build a working app in two minutes using natural language. Not a prototype, but a “product” that can move surprisingly close to production. New hires on one security team were shipping agents by the end of their second week.

"The thought that you're even going to get them educated around security concepts to the point where they're applying it," one panelist said. "I think it's probably the furthest thing from what's going to happen."

You can't scale a training-based security model to an organization where every employee is effectively a developer. If that’s the case, the only viable path is fully integrating security into the system (tools, assistants, workflows). In other words, security must be part of these processes rather than layered on as a separate step.

 

Software is becoming disposable and security is not prepared

As one panelist shared, in a single week a team spun up 500 agents. These agents did their respective jobs and then were gone, never to be seen again.

Most of our controls, particularly in AppSec (e.g., scan, review, approval gates), assume the software will endure for some significant period of time. You have control over the code, scan it regularly and fix issues that are uncovered. Rinse and repeat, weekly, monthly, quarterly, or whatever policy dictates.

But with disposable code, how do you govern and secure software that is gone by the time you’ve turned your attention to it?

Most teams don’t have a real answer yet, but one place to start is establishing security guardrails at the agent-class level that define what an agent can and cannot do. Agent identity and permissions should likewise be handled structurally at the point of creation, rather than bolted on after the fact.
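To make the idea concrete, here is a minimal sketch of what class-level guardrails enforced at creation could look like. The class names, permission sets, and `Agent`/`create_agent` interface are all illustrative assumptions, not a real product API:

```python
# Hypothetical sketch: guardrails declared once per agent class and
# stamped onto every agent at creation time.
from dataclasses import dataclass, field

# Policies live at the agent-class level, not per instance.
AGENT_CLASS_POLICIES = {
    "data-reader": {"allowed_actions": {"read"}, "allowed_scopes": {"analytics-db"}},
    "ticket-bot":  {"allowed_actions": {"read", "write"}, "allowed_scopes": {"jira"}},
}

@dataclass
class Agent:
    name: str
    agent_class: str
    # Derived from the class policy at creation, so the agent is born
    # inside its guardrails rather than having them added later.
    allowed_actions: set = field(default_factory=set)
    allowed_scopes: set = field(default_factory=set)

def create_agent(name: str, agent_class: str) -> Agent:
    policy = AGENT_CLASS_POLICIES.get(agent_class)
    if policy is None:
        # No policy means no agent -- creation is the control point.
        raise ValueError(f"No guardrail policy for agent class {agent_class!r}")
    return Agent(name, agent_class,
                 set(policy["allowed_actions"]), set(policy["allowed_scopes"]))

def is_permitted(agent: Agent, action: str, scope: str) -> bool:
    # Checked on every action; an agent can never exceed its class policy.
    return action in agent.allowed_actions and scope in agent.allowed_scopes
```

The point of the sketch is the ordering: the policy check happens before the agent exists, so even an agent that lives for two minutes and disappears was never able to act outside its class.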


 

Security can't operate as a gate

One panelist used an analogy to help frame security in the context of AI.

Imagine an Amazon warehouse, fully automated. Every station, every conveyor, every handoff sees machines interacting with machines at production speed. Now imagine inserting a manual checkpoint: a human in that process would not only slow everything down but also risk personal injury.

The implication is straightforward: if security has to operate inside the AI development system at production speed, it can’t be a gate. For the first time, he argued, we have a real chance to ship security at the same pace as everything else. That has never been true before.

 

The IR problem nobody has solved

One of the more uncomfortable moments came when one panelist described bringing a new question to the incident response team.

If an agent goes rogue, taking an action it shouldn't or accessing something it shouldn't, how would the team know it was involved, and how could they reconstruct the scenario?

When he first raised it, the IR team heard a different question: not “how do we investigate a rogue agent?” but “how do we use AI as part of our investigations?”

"What I'm asking is: if we have an agent that goes wrong, do we have the telemetry? Do we have the traceability? And if the agent is gone, what then?"

Most organizations do not know.
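One structural answer to the "what if the agent is gone?" question is telemetry that outlives the agent: every action written out-of-band to durable, append-only storage before the agent exits. The field names and the `record_action`/`reconstruct` interface below are illustrative assumptions, a sketch of the idea rather than any particular tool:

```python
# Hypothetical sketch: append-only audit telemetry for ephemeral agents,
# so an investigation remains possible after the agent itself is gone.
import json
import time

AUDIT_LOG: list = []  # stand-in for durable storage (e.g. a SIEM pipeline)

def record_action(agent_id: str, agent_class: str, action: str, target: str) -> None:
    # Written out-of-band on every action, not at agent shutdown --
    # a crashed or killed agent still leaves a trail.
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "agent_id": agent_id,
        "agent_class": agent_class,
        "action": action,
        "target": target,
    }))

def reconstruct(agent_id: str) -> list:
    # After the fact, the timeline survives even though the agent does not.
    events = [json.loads(e) for e in AUDIT_LOG]
    return [e for e in events if e["agent_id"] == agent_id]
```

The design choice that matters is that the log is the system of record, independent of the agent's lifecycle; IR queries the trail, never the (long-gone) agent.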

 

When the human role shifts

The closing predictions produced the panel's only real disagreement, and it was a good one.

One panelist made the historical case for optimism: nearly every technological revolution has come with predictions of job losses, and those predictions have nearly always failed to materialize. From the loom to the telephone to the automobile, each innovation generated more jobs than it eliminated. AI, he argued, will likely prove similar, driving business growth we can’t yet begin to anticipate.

Another panelist disagreed, at least partially. His distinction wasn't about whether jobs disappear overall, but about who is at risk. For those who entered software development because the field is lucrative, but without a passion for building, AI may turn out to be a replacement. And those who don’t adapt and learn to leverage AI may find themselves in a perilous position.

Finally, two panelists aligned on one theme: value is shifting from execution toward strategic direction. Writing code is getting cheaper and faster. Knowing what to build, how to frame problems, and how to give systems the context they need to produce good outcomes is where the real value will lie.

"Roles will involve execution becoming trivial," one speaker said. "So it's not about execution anymore. It's going to be more about people who can think, have ideas, apply structured logical reasoning."

The teams navigating this well have one thing in common, and it isn’t the most advanced policies. They're the ones that made security easy enough that people want to use it, and instrumented enough that they can see what's actually happening, not just what they assumed would happen.

 

Get a stronger AppSec foundation you can trust and prove it’s doing the job right.
