- The Midas Report
- Posts
- Who Do You Trust When the Hacker Is the Agent?
Who Do You Trust When the Hacker Is the Agent?
4 min read.

Organizations are entering an era where machines shoulder more responsibilities, but the security infrastructure isn’t keeping pace. As AI agents become capable of executing complex tasks autonomously, attackers are inventing novel ways to hijack, impersonate, or manipulate these non human workloads. Traditional defenses, centered around human user authentication, are rapidly losing relevance in defending against threats that exploit agentic systems.
Agentic Architectures and the Rise of Non Human Identities
Unlike conventional automation tools, AI agents act with autonomy. They navigate workflows, make decisions, and carry out actions with little human supervision. Their identities are not tied to usernames and passwords but to API keys, tokens, and service principals, credentials that are often out of sight, out of mind. This gives attackers a new attack surface: steal or spoof an agent’s credentials, and you become the agent in the eyes of the system. A Palo Alto Networks Unit 42 report underscores this risk, “identity spoofing and impersonation”, where attackers pose as legitimate agents, can grant them unauthorized access to tools and data.
These AI agents require a new identity domain alongside human and traditional machine users, as they make decisions and access data without oversight. The traditional IAM (Identity and Access Management) frameworks, built for static machine accounts or employee roles, simply don’t fit. Academic proposals are emerging to redefine IAM for these agents, for example, frameworks incorporating decentralized identifiers, verifiable credentials, and zero trust principles tailored to autonomous workflows.
Hijacking, Prompt Injection, and Silent Attacks
New research paints a sobering picture of the threat landscape. At Black Hat USA 2025, Zenity Labs revealed “zero click” exploit methods, meaning attackers can silently hijack AI agents to exfiltrate data, manipulate workflows, or impersonate users without any user interaction. These exploits include memory persistence, enabling long term control over agent behavior eSecurity Planet.
Prompt injection represents another sharp edge vulnerability. Here, attackers embed malicious instructions into an agent’s input stream, through documents, web pages, or emails, and the agent executes them thinking they’re legitimate. OWASP now ranks prompt injection as the top security risk for LLM based systems.
Furthermore, adversaries are weaponizing AI infrastructures themselves. According to CrowdStrike, attackers are targeting agentic AI systems directly, stealing credentials, deploying malware, and compromising agent workflows, drastically expanding enterprise vulnerability surfaces.
Shadow AI and the Governance Gap
Compounding the risk is the prevalence of “Shadow AI”, unsupervised and unmanaged agentic tools creeping into enterprise environments. In the UK, TechRadarPro reports that machine identities already outnumber human users 100 to 1 in many organizations, and few companies have governance frameworks in place to manage them.
Okta’s latest research further highlights the oversight problem: although interest in identity security is growing, only 10% of organizations have mature strategies for non human identity governance. Most don’t manage agentic workloads with the same rigor reserved for employees.
Toward Secure, Agent Aware Identity Practices
The path forward is clear: identity security must be reimagined to include AI agents as first class entities. That means embedding agent identities into governance policies, enforcing least privilege access, monitoring credential usage, and maintaining audit logs of agent actions. Strategies like zero trust, centralized governance, secure by design, and policy enforcement must extend across the entire agent lifecycle.
Advanced frameworks, like Cisco’s AGNTCY Agent Identity, are being developed to assign, verify, and track credentials for AI agents specifically. Academic research, such as the novel zero trust framework leveraging decentralized identifiers and verifiable credentials, offers a promising blueprint for scalable agentic identity governance.
Why It Matters
As AI systems take on more responsibility, the security perimeter shifts from humans to autonomous agents. Without robust identity governance, your next breach may come from within, from agents themselves. The same speed and autonomy that make them powerful also make them vulnerable.
Organizations must recognize: identity is no longer just for people. Agent workforces need identity strategy too.
Sources
https://www.techradar.com/pro/security/weaponized-ai-is-making-hackers-faster-more-aggressive-and-more-successful
https://www.techradar.com/pro/from-crawlers-to-ai-agents-why-untangling-the-new-ai-powered-web-takes-an-intent-based-approach
https://www.esecurityplanet.com/news/ai-agents-vulnerable-silent-hijacking/
https://www.okta.com/identity-101/agentic-ai-security-threats/
https://stytch.com/blog/ai-agent-fraud