As businesses rush to integrate AI into everything from help desks to backend systems, a new type of digital worker is quietly taking shape: autonomous AI agents. These tools aren’t just smart assistants — they’re fully independent actors, capable of executing tasks, making decisions, and in some cases, accessing sensitive systems without human oversight.

And that’s exactly what has cybersecurity experts on edge.

At this year’s RSA Conference in San Francisco, the buzz wasn’t just about ransomware or supply chain hacks. It was about AI agents — and the very real security risks that come with handing the keys to the network over to something that doesn’t sleep, doesn’t forget, and doesn’t need permission every time it moves.

“This is not hypothetical,” said Jason Clinton, Chief Information Security Officer at AI firm Anthropic. “We’re already seeing organizations deploy these agents into production environments. The problem is, most aren’t thinking about the identity layer — and that’s a disaster waiting to happen” (Axios).

The concern isn’t that AI agents are malicious. It’s that they’re powerful — and unmonitored. They operate at machine speed, and if misconfigured or compromised, they can rip through systems before human admins even know something’s wrong.

Vendors like 1Password, Okta, and OwnID are now racing to adapt identity and access management tools to this new reality. The challenge? Traditional IAM systems were built around humans — who log in, take breaks, and eventually log out. AI agents, by contrast, are always on, and often spun up by other machines without a clear human in the loop (Axios).
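One pattern the identity vendors in this space describe, sketched here in minimal form with hypothetical names rather than any particular product’s API, is to give agents short-lived, narrowly scoped credentials instead of standing API keys, so a forgotten or compromised agent ages out of access on its own:

```python
import secrets
import time
from dataclasses import dataclass


@dataclass
class AgentCredential:
    """A short-lived, scoped credential for a machine identity."""
    token: str
    scopes: frozenset      # actions this agent may perform
    expires_at: float      # epoch seconds

    def allows(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at


def issue_agent_credential(scopes, ttl_seconds=900):
    """Mint a credential that expires on its own, so no human has to
    remember to 'log the agent out'."""
    return AgentCredential(
        token=secrets.token_urlsafe(32),
        scopes=frozenset(scopes),
        expires_at=time.time() + ttl_seconds,
    )


# Example: a ticket-triage agent gets read access to the ticket queue
# for 15 minutes, and nothing else.
cred = issue_agent_credential({"tickets:read"}, ttl_seconds=900)
assert cred.allows("tickets:read")
assert not cred.allows("payroll:write")
```

The point of the expiry is philosophical as much as technical: the credential model assumes the agent will eventually misbehave or be forgotten, and limits the blast radius in advance.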

“This is a paradigm shift,” said Jen Easterly, Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), during a panel at the conference. “We need to treat AI agents as first-class digital citizens — with credentials, guardrails, and the same accountability we expect of a human employee.”

But that’s easier said than done.

A major concern among security teams is lateral movement — the idea that an AI agent compromised in one system could be used to move through the network, escalate privileges, or exfiltrate data. Even worse, because many of these agents operate behind the scenes, their behavior can be difficult to log, much less detect in real time.
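One way teams can make that behavior loggable at all is to wrap every action an agent takes in an audit layer that records who did what, with which arguments, and whether it succeeded. The function and agent names below are assumptions for the sketch, not a specific logging product:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent_audit")


def audited(agent_id):
    """Decorator that emits a structured audit record for every call
    an agent makes, even if the call raises an exception."""
    def wrap(fn):
        def inner(*args, **kwargs):
            entry = {
                "ts": time.time(),
                "agent": agent_id,
                "action": fn.__name__,
                "args": repr(args),
                "kwargs": repr(kwargs),
            }
            try:
                result = fn(*args, **kwargs)
                entry["status"] = "ok"
                return result
            except Exception as exc:
                entry["status"] = f"error: {exc}"
                raise
            finally:
                audit_log.info(json.dumps(entry))
        return inner
    return wrap


@audited(agent_id="triage-agent-01")
def query_customer_record(customer_id):
    # Placeholder for a real downstream call.
    return {"customer_id": customer_id}


query_customer_record("c-1042")   # emits one JSON audit line
```

A trail like this doesn’t stop lateral movement by itself, but it gives defenders something to baseline and alert on.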

“Imagine giving a junior intern global admin access,” said one cybersecurity researcher, who asked not to be named. “Now imagine that intern doesn’t need to sleep, can launch 10,000 API calls in a minute, and never makes a typo. That’s the risk we’re dealing with.”

In response, some security leaders are calling for the development of “AI kill switches” — emergency stop mechanisms that can halt or isolate rogue agents before damage is done. Others are pushing for built-in rate limiting, behavioral baselines, and stricter credentialing for AI accounts.
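A minimal sketch of what such a mechanism could look like, combining a manual kill switch with a sliding-window rate limit that trips it automatically; the class name and thresholds here are illustrative assumptions, and a production version would live in shared infrastructure rather than a single process:

```python
import threading
import time


class AgentGovernor:
    """Gates every outbound agent action behind a kill switch and a
    simple per-minute rate limit."""

    def __init__(self, max_calls_per_minute=60):
        self.max_calls = max_calls_per_minute
        self.calls = []                  # timestamps of recent calls
        self.killed = threading.Event()  # the "kill switch"

    def kill(self, reason: str):
        print(f"KILL SWITCH: {reason}")
        self.killed.set()

    def check(self):
        """Call before every agent action; raises if the agent is halted."""
        if self.killed.is_set():
            raise RuntimeError("agent halted by kill switch")
        now = time.time()
        self.calls = [t for t in self.calls if now - t < 60]
        if len(self.calls) >= self.max_calls:
            # Blowing past the baseline trips the switch automatically.
            self.kill("rate limit exceeded")
            raise RuntimeError("agent halted: rate limit exceeded")
        self.calls.append(now)


governor = AgentGovernor(max_calls_per_minute=100)
governor.check()   # invoked before each API call the agent makes
```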

But while the technology is sprinting ahead, the policy conversation is still playing catch-up.

No major regulatory framework yet addresses the specific risks of autonomous agents. And while NIST’s AI Risk Management Framework and the EU’s AI Act touch on AI accountability, neither has fully grappled with what it means to secure a bot that behaves like a user but moves like a worm.

For now, organizations experimenting with autonomous agents are being urged to start small, limit their permissions, and implement strict audit logs. Think of it as the digital equivalent of letting a new hire shadow someone for a few weeks — before giving them access to payroll.
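In code terms, that shadowing period might look like a default-deny policy with a review mode, where the agent’s intended actions are recorded but not executed until a human signs off. This is a hypothetical sketch under assumed names, not a reference implementation:

```python
class ShadowModePolicy:
    """Default-deny action policy with a 'shadow mode': while shadowing,
    the agent's intended actions are only recorded, never executed."""

    def __init__(self, allowed_actions, shadow=True):
        self.allowed = set(allowed_actions)
        self.shadow = shadow
        self.intended = []   # what the agent *would* have done

    def authorize(self, action: str) -> bool:
        if action not in self.allowed:
            return False                  # default deny
        if self.shadow:
            self.intended.append(action)  # log for human review
            return False                  # ...but don't execute yet
        return True


policy = ShadowModePolicy({"tickets:read", "tickets:comment"})
policy.authorize("tickets:read")     # recorded, not executed
policy.authorize("payroll:write")    # denied outright
policy.shadow = False                # "promoted" after the review period
policy.authorize("tickets:read")     # now actually allowed
```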

“Just because an AI can act on its own doesn’t mean it should,” Clinton added. “Treat it like you would any powerful tool. Keep it on a leash until you know you can trust it.”

As businesses flirt with fully autonomous AI, one thing is clear: cybersecurity teams can’t afford to wait for the first major breach. The agents are already here — and they’re not asking for permission.

Photo by Owen Beard on Unsplash