Not too long ago, your developer endpoint was a known quantity. Someone wrote code, pulled a few dependencies they’d vetted, ran builds against a CI pipeline, and pushed to a repo. The threat model was something most security teams could draw on a whiteboard.
That model is starting to crack.
GitHub Copilot, Claude Code, Cursor, and the rest of the AI coding assistant crowd have moved from novelty to default. The 2025 Stack Overflow Developer Survey found 84% of developers use or plan to use AI tools in their workflow, and 51% of professional developers are using them daily. That’s a meaningful shift in how code gets written, what packages get pulled, and what executes on developer machines. It’s also largely happening below the radar of how most organizations think about endpoint risk.
What’s Changed on the Endpoint
Traditional threat modeling for a developer workstation assumed a fairly bounded set of behaviors. You’d map out the IDE, the package manager, the version control client, maybe a few build tools. Risky executions came from known sources: a malicious dependency, a compromised maintainer account, a phishing link.
AI coding assistants have widened that surface in ways that don’t get talked about enough.
For one, they pull code from a probabilistic model rather than a known repository. The output looks like the developer wrote it, but the suggestions are statistical reconstructions of patterns the model saw during training. There’s no chain of custody for an autocompleted function the way there is for a vendored library.
For another, agentic workflows like Claude Code and Cursor’s Agent mode are increasingly running commands directly. The assistant doesn’t just suggest pip install requests; it executes the install. It writes and runs scripts. It modifies files. The developer becomes a reviewer of actions taken on their behalf, and reviewers miss things.
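To make that reviewer role concrete, here’s a minimal sketch of a human-in-the-loop gate for agent-proposed shell commands. Everything in it is hypothetical: the wrapper, the allowlist, and the prompt are illustrative, not how any particular assistant implements approvals.

```python
import shlex
import subprocess

# Hypothetical approval gate between an agent's proposed command and its
# execution. Nothing here mirrors a real assistant's internals; it just
# makes the "developer as reviewer" loop explicit.

SAFE_PREFIXES = [["git", "status"], ["git", "diff"], ["ls"]]

def is_preapproved(argv):
    """Commands matching a short allowlist run without prompting."""
    return any(argv[:len(prefix)] == prefix for prefix in SAFE_PREFIXES)

def run_agent_command(command):
    """Execute an agent-proposed command only after explicit approval."""
    argv = shlex.split(command)
    if not is_preapproved(argv):
        answer = input(f"Agent wants to run {command!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            print("Blocked.")
            return 1
    return subprocess.run(argv).returncode
```

The failure mode worth noticing isn’t in the gate itself. It’s approval fatigue: after the fiftieth prompt, every answer becomes “y”, which is exactly how reviewers miss things.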
Then there’s the package layer, which is where this stops being abstract.
Slopsquatting Is Real and Already Happening
When a coding assistant suggests a package that doesn’t exist, you get an install error. That’s harmless on its own, but attackers have already noticed which packages models like to invent and registered them on npm or PyPI with a payload attached. This type of attack is called slopsquatting.
A USENIX Security 2025 paper analyzing 576,000 code samples across 16 popular LLMs found that 5.2% of recommended packages from commercial models and 21.7% from open-source models were hallucinations. That’s hundreds of thousands of unique fictional package names. Not all of them get squatted, but attackers don’t need a high hit rate when they can monitor model outputs at scale and register the most repeatable hallucinations.
The catch with slopsquatting is that it bypasses most of the heuristics security teams have built up around supply chain attacks. A typosquatted package depends on a developer mistyping. A malicious dependency requires the attacker to compromise a real maintainer or trick a real user. Slopsquatting just requires the model to confidently suggest a name that sounds plausible.

Provenance Got Murky
Software bill of materials (SBOM) work over the last few years has been built on a simple premise: you should know what’s in your software. Where it came from, who maintains it, what version you’re running.
AI assistants complicate this in subtle ways. When a developer accepts a 40-line autocompletion that includes a function adapted from training data, what’s the provenance? It’s not a copy-paste from Stack Overflow you can grep your codebase for. It’s not a vendored dependency tracked in package.json. It’s just code, sitting in your repo, indistinguishable from anything else the developer wrote.
The same applies to package suggestions made conversationally. If a developer asks an assistant “what’s a good library for parsing this?” and the assistant suggests something, that suggestion didn’t come from your approved-vendor list, your internal package registry, or any review process. It came from a model’s best guess about what packages exist and which ones fit the question.
This isn’t a reason to ban the tools, which would be both impractical and counterproductive. But it does mean dependency review needs to extend to the conversational layer, not just the lockfile.
What Runs With Elevated Rights
The category that should worry security teams most is what the AI assistant ends up executing locally, and under whose privileges.
The August 2025 s1ngularity attack on the Nx build system is the cleanest illustration so far. Attackers compromised the popular npm package and embedded malware that, among other things, looked for locally installed AI command-line tools like Claude Code, Gemini, and Amazon Q. When it found them, it ran them with permissive flags such as --dangerously-skip-permissions and --yolo to weaponize the AI’s own filesystem access for credential theft. Over a thousand valid GitHub tokens, npm tokens, SSH keys, and cloud credentials were exfiltrated before GitHub could shut down the attacker repos.
That attack pattern is novel: the malware co-opted the developer’s AI tooling as an accomplice. It worked partly because developers, in the interest of moving fast, often grant their AI assistants the same level of access they have themselves: full read/write on their home directory, the ability to run arbitrary commands, and access to their cloud credentials and source repositories.
If the developer has standing administrator rights on their endpoint, so does anything the AI runs on their behalf.
What This Means for How We Think About Developer Endpoints
The shift here is conceptual before it’s technical. Developer machines have always been a privileged perimeter, but the assumption was that the human at the keyboard was the agent making decisions. With AI assistants in the loop, decisions about what to install, what to execute, and what to read are increasingly distributed between human and model. The model is fast, confident, and wrong some non-trivial percentage of the time.
A few principles worth thinking about:
Treat AI-suggested packages as untrusted by default. Same way you’d treat a package suggested by a stranger on an internet forum. Verify they exist, check their publish date and maintainer history, look for the kinds of red flags that attackers leave when they’re rushing to capitalize on a hallucination they spotted.
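Most of those signals are one HTTP request away. Here’s a minimal sketch against PyPI’s JSON API; the 30-day freshness threshold is an arbitrary assumption, and maintainer history would need sources beyond this one endpoint.

```python
import datetime as dt
import json
import urllib.error
import urllib.request

def pypi_package_report(name):
    """Pull basic provenance signals for a package from PyPI's JSON API:
    does it exist, when was it first published, how many releases has it
    had. A brand-new package with one release is the classic squat profile."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            return {"name": name, "exists": False}
        raise

    upload_times = [
        dt.datetime.fromisoformat(f["upload_time_iso_8601"].rstrip("Z"))
        for files in data["releases"].values()
        for f in files
    ]
    first = min(upload_times) if upload_times else None
    return {
        "name": name,
        "exists": True,
        "first_release": first,
        "release_count": len(data["releases"]),
        "suspiciously_new": first is not None
                            and (dt.datetime.utcnow() - first).days < 30,
    }
```

A nonexistent package is the good outcome here. The one that exists, is three weeks old, and has a single release is the one that deserves a closer look.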
Don’t run AI agents with more privilege than they need. This is just least privilege applied to a new actor. If the developer doesn’t need persistent admin rights to do their job, neither does the assistant operating on their behalf. Just-in-time elevation, scoped to specific tasks, contains the blast radius of a compromised package or a confused model.
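What that looks like varies by tool, but the shape is simple. A sketch under stated assumptions: the agent binary name is a placeholder, and the real control would come from OS-level sandboxing, not just an environment scrub.

```python
import os
import subprocess

# Illustrative only: launch an agent CLI with a scrubbed environment and a
# working directory scoped to one project, instead of inheriting the
# developer's full session. "some-agent-cli" is a placeholder, not a real tool.

SENSITIVE_PREFIXES = ("AWS_", "GITHUB_", "NPM_", "SSH_")

def scrubbed_env():
    """Copy the environment, dropping credential-bearing variables."""
    return {
        key: value
        for key, value in os.environ.items()
        if not key.startswith(SENSITIVE_PREFIXES)
    }

def run_agent_scoped(project_dir):
    subprocess.run(
        ["some-agent-cli"],   # hypothetical binary name
        cwd=project_dir,      # scope filesystem work to one repo
        env=scrubbed_env(),   # no cloud or registry credentials
        check=True,
    )
```

An environment scrub doesn’t stop a process from reading ~/.ssh directly, which is what the s1ngularity malware did; that takes container or OS-level isolation. But it’s a cheap first cut at not handing the agent everything.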
Audit what the assistant actually did. Most coding assistants log their actions, but those logs rarely make it into the same audit trail as other endpoint activity. They should. If something goes wrong, you want to be able to reconstruct what got installed, what got executed, and what files got touched.
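If the assistant’s logs never reach your telemetry, a thin wrapper can produce a trail you control. A sketch, with the log location and record fields as assumptions rather than any tool’s real format:

```python
import json
import shlex
import subprocess
import time
from pathlib import Path

AUDIT_LOG = Path.home() / ".agent-audit.jsonl"  # hypothetical location

def run_and_audit(command):
    """Run an agent-proposed command and append a structured audit record,
    giving you one reconstructable trail of what actually executed."""
    argv = shlex.split(command)
    started = time.time()
    result = subprocess.run(argv, capture_output=True, text=True)
    record = {
        "ts": started,
        "argv": argv,
        "exit_code": result.returncode,
        "stdout_bytes": len(result.stdout),
        "stderr_bytes": len(result.stderr),
    }
    with AUDIT_LOG.open("a") as fh:
        fh.write(json.dumps(record) + "\n")
    return result.returncode
```

The format matters less than the destination: whatever you capture should land wherever the rest of your endpoint activity does, not in a dotfile nobody reads.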
Pay attention to the package layer. Dependency provenance tools, lockfile review, and pre-commit checks aren’t new, but the volume and source of new dependencies coming into projects has changed. The processes that worked when developers manually added dependencies once a sprint may not hold up when an agent is adding them mid-conversation.
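A concrete starting point is a pre-commit check that diffs the dependency file and surfaces anything new, regardless of who, or what, added it. A sketch for a flat requirements.txt; real lockfiles like package-lock.json or poetry.lock need a proper parser:

```python
import subprocess

def new_dependencies(lockfile="requirements.txt"):
    """Diff the working-tree dependency file against HEAD and return
    package names that appear only in the new version."""
    old = subprocess.run(
        ["git", "show", f"HEAD:{lockfile}"],
        capture_output=True, text=True,
    ).stdout.splitlines()
    with open(lockfile) as fh:
        new = fh.read().splitlines()

    def names(lines):
        # Crude: take the part before any version specifier.
        return {
            line.split("==")[0].split(">=")[0].strip()
            for line in lines
            if line.strip() and not line.startswith("#")
        }

    return names(new) - names(old)

# Every name this returns is a candidate for the provenance check sketched
# earlier, whether a human or an agent added it mid-conversation.
```

Wire it into a pre-commit hook and the volume problem mostly contains itself: the agent can add dependencies as fast as it likes, but nothing lands without passing the same gate.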

A Different Threat Model
The upshot isn’t that AI coding assistants are bad, or that organizations should ban them. The productivity gains are real, even if they’re more uneven than the marketing suggests. Stack Overflow’s 2025 numbers show developer trust in AI output has actually dropped as adoption rose, with only 29% of developers saying they trust AI accuracy, which is probably a healthier baseline than blanket enthusiasm.
The point is that the threat model your team built two years ago probably doesn’t include any of this. It assumes a developer who chooses dependencies deliberately, runs commands intentionally, and operates within a sandbox of their own making. None of those assumptions hold cleanly anymore.
Updating the model means accepting that the developer endpoint is now a hybrid environment: part human workspace, part AI execution surface, part pipeline into your production code. Securing it well means treating the AI layer with the same scrutiny you’d apply to any other privileged process running on a sensitive machine.
Quietly, that’s already a different threat model than most teams are working from. The sooner that changes, the smaller the surprise when the next s1ngularity-style campaign lands.