Duplicate » admin by request

How Agentic AI is Creating Security Holes in Your Organization

main

Agentic AI tools are showing up in workplaces whether IT departments approve them or not. Employees use them to automate repetitive tasks, generate code, analyze data, and handle workflows that used to require manual intervention. The productivity gains are real, but so are the security risks most organizations haven’t thought through yet.

These aren’t the simple chatbots that answer questions and stop there. Agentic AI can execute code, read files across your network, make API calls to internal systems, and modify configurations based on natural language instructions.

When you give an AI agent access to your infrastructure, you’re essentially creating another user with privileges, except this one doesn’t understand security policies and can be manipulated in ways human users can’t.

Your Data is Already Leaving Your Network

Developers paste proprietary code into AI assistants for debugging help. Analysts upload customer data for processing. Employees share internal documents for summarization. Most of this data gets processed on external servers that your security team has no visibility into.

The AI platforms themselves vary widely in how they handle information. Some companies train their models on user inputs unless you specifically opt out, while others store conversation histories indefinitely. A few offer enterprise plans with better data handling, but most employees aren’t using those. They’re using free or personal accounts because they’re faster to access and don’t require approval.

Around 71% of workers use unauthorized AI tools at work, which means it’s more likely than not that someone in your organization is doing the same.

» admin by request

When AI Agents Run Commands with Your Privileges

An attacker doesn’t need to compromise your systems directly, they can just embed hidden instructions in a document that an AI agent processes. The agent follows these instructions while appearing to handle the employee’s legitimate request. The employee sees helpful output and runs the generated commands without realizing they’re malicious.

Here’s how a typical attack unfolds:

Step 1: Injection
Attacker embeds malicious instructions in a document, email, or website that looks legitimate.

Step 2: Processing
Employee asks their AI agent to summarize the document or help with related work. The agent reads the hidden instructions.

Step 3: Execution
AI agent generates commands that follow the attacker’s instructions. Employee trusts the output and runs it.

Step 4: Compromise
Malicious commands execute with the employee’s privileges, potentially with admin rights.

Research from organizations like Microsoft’s Security Response Center has demonstrated prompt injections that exfiltrate data, modify system settings, and execute unauthorized commands. The attacks work because AI agents can’t reliably distinguish between instructions from trusted users and instructions hidden in untrusted content.

Admin By Request’s EPM solution helps here by removing standing admin rights and requiring approval for privilege elevation. When an employee wants to run an AI-generated script that needs elevated permissions, the request goes through a workflow that creates visibility and an audit trail. Someone can review what’s about to execute before it runs with admin rights.

Credentials End Up in Unexpected Places

AI agents often need authentication to do useful work. An agent that interacts with your cloud infrastructure needs API keys. One that modifies databases needs connection strings with passwords, while one that sends emails needs SMTP credentials.

Employees provide these credentials to make the AI agent functional. Those secrets end up stored in:

  • Chat histories that sync across devices
  • The AI platform’s memory systems
  • Configuration files in cloud storage
  • Prompts when employees ask for troubleshooting help

These credentials rarely get rotated because nobody thinks of the AI agent as a place where secrets are stored. An attacker who compromises an employee’s AI account doesn’t just get access to that account, they get access to every system that employee authenticated the AI agent with.

Shadow AI Moves Faster Than Shadow IT

IT departments can spend ages dealing with shadow IT, but Shadow AI moves even faster because AI tools require minimal setup and deliver immediate value. An employee can start using an AI agent in under a minute without installing anything or requesting access.

Your security team can’t monitor what they don’t know exists. Traditional network monitoring might catch some AI traffic, but many agents use standard HTTPS connections that look like regular web browsing. Data loss prevention tools might flag obvious data transfers, but they struggle with the contextual, conversational nature of AI interactions.

Writing a policy about acceptable AI use won’t stop employees from using unauthorized tools when those tools make their jobs easier. You need technical controls that work regardless of which AI service someone chooses.

» admin by request

What You Can Do About It

The typical response to new security risks is writing policies and hoping employees follow them. With AI agents, that approach fails because the tools are too useful and too easy to access. You need technical controls that limit damage even when employees use unauthorized AI tools.

1. Remove standing admin rights

Implement just-in-time privilege elevation with approval workflows. AI-generated commands can’t automatically run with elevated privileges, even if an employee trusts the output without reviewing it. Every privileged action requires explicit approval and gets logged.

2. Segment your network

Your most sensitive systems, databases, and file shares should require additional authentication that can’t be automated through an AI agent’s session. An AI agent working through an employee’s context shouldn’t be able to reach everything that employee technically has access to.

3. Monitor privileged activity

Track when elevated privileges are used and what processes run during those sessions. If an employee is constantly requesting elevation to run scripts, that pattern should be visible to your security team for review.

4. Treat AI-generated code like untrusted input

Implement code review requirements that treat AI-generated code the same way you’d treat code from someone who doesn’t understand your security standards. Static analysis tools can catch common vulnerabilities before that code reaches production.

5. Rotate credentials that might be exposed

If employees are using AI agents to interact with internal systems, assume those credentials could be compromised through the AI platform. Regular rotation limits the window of exposure if an AI account gets breached.

This Isn’t Getting Simpler

Agentic AI capabilities are expanding quickly. Today’s agents can read files and execute scripts. Future versions may have persistent access to your systems, make autonomous decisions about infrastructure changes, and coordinate with other AI agents to complete complex workflows.

Organizations that wait to address these security issues will find themselves playing catch-up after employees have already established insecure patterns that are difficult to reverse. The solution isn’t banning AI tools (employees will use them anyway, they’ll just hide it better). Build security controls that account for AI as another type of user that needs managed access, monitoring, and restrictions based on what it can do to your systems.

Controlling privilege escalation protects your systems whether the threat comes from AI agents, malware, or compromised accounts. Admin By Request EPM implements the just-in-time elevation and approval workflows that prevent unauthorized commands from running with admin rights. Try our free plan for up to 25 endpoints, or book a demo to walk through how it works.

About the Author:

Picture of Pocholo Legaspi

Pocholo Legaspi

Pocholo Legaspi is a seasoned content marketer and SEO specialist with over nine years of experience crafting digital content that drives engagement and growth. With a background in tech and a Master’s in Business Informatics, he brings a data-driven approach to content strategy and storytelling.

Share this blog to your channels:

Lifetime Free Plan for 25 Endpoints,
No Strings Attached.

Fill out the form to create your account and get started.

Book a Demo

Orange admin by request circle tick logo. » admin by request