The simplest way to ship an AI agent in 2024 was to give it the user's API key. The agent ran as the user, used the user's permissions, did things the user could do. It worked. It still works for tiny use cases. Then it stops working, and the failure modes are concrete.
This is a short post on three of those failure modes from the field, and on what the fix looks like — which we covered structurally in Why agents need their own identities. The point of this piece is to make the failure modes vivid so the structural argument lands.
Failure mode 1: The compromised agent has the user's full blast radius
Prompt injection isn't theoretical. It's a class of attack that ships with every agent that reads untrusted input — a customer email, a fetched web page, a document the user uploaded. The injection tells the agent to do something the user didn't ask for. If the agent is running with the user's credentials, "something the user didn't ask for" can be anything the user is allowed to do.
We've watched this play out a half-dozen times in customer support tickets across our beta. The pattern repeats:
- Customer's agent is connected to their email, their database, their billing.
- A poisoned input arrives — most often from a content-fetching tool, sometimes from a customer-uploaded doc.
- The agent, following the injection, exfiltrates data, mass-emails contacts, or modifies a record that shouldn't be modified.
- The audit log says the user did it. The user didn't.
The recovery is painful because there's no clean line in the audit log between "what the user actually did" and "what the agent did under the user's name." You end up rolling back time windows of activity and apologizing to customers on the user's behalf, because there's no way to apologize on the agent's behalf — the agent doesn't exist.
If the agent has its own credentials, the same attack has bounded blast radius. You revoke the agent's token, the agent stops, the human is unaffected. The audit log is clear: Argus did the bad thing, between 14:02 and 14:11, here are the actions, here's what to roll back.
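Concretely, "bounded" means the cleanup is one scoped query. A sketch in SQL, assuming the activity_log table this post queries later; the timestamps and every name beyond principal_id are illustrative:

SELECT action, created_at
FROM activity_log
WHERE principal_id = 'argus_uid'
  AND created_at BETWEEN '2026-04-26 14:02' AND '2026-04-26 14:11'
ORDER BY created_at;
-- Revoke the agent's token, then work this list top to bottom.
-- The human's own activity never appears in the result set.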
Failure mode 2: You can't tell when the agent is the bug
Software has bugs. Agents are software. Sometimes the agent does something wrong because the prompt was wrong, sometimes because the model misfired, sometimes because the tool returned something unexpected. In all three cases, you want the audit log to say "the agent did this," because the response is different depending on the cause.
Without separate identity, every agent action is recorded as a user action. When something goes wrong, the user is on the hook, even when they shouldn't be.
A real example: a customer ran an agent that drafted invoices. One night, the agent ran in a loop and drafted seventeen invoices for the same customer. The audit log said the user did it. The user was asleep. The user found out when the customer called the next morning. The user spent an hour explaining that they hadn't, in fact, sent seventeen invoices — that there was an agent, that the agent had a bug, that they were sorry.
The structural fix isn't "tell the user not to run agents at night." The fix is that the seventeen invoices are recorded as drafted by Argus, between 02:14 and 02:21, in the agent's audit trail. The user can roll back Argus's actions without rolling back their own. The customer can be told the truth: an automated process made a mistake, here's what we're doing to fix it. Both parties get a coherent story.
Failure mode 3: You can't promote, demote, or fire the agent
Agents accumulate trust over time, the same way coworkers do. You start by giving them low-stakes work. You watch what they do. If they do well, you give them more. If they mess up, you scope them down. This is the natural arc of trust in any organization.
When the agent runs as the user, none of this is possible. There's nothing to scope down. The agent has whatever the user has. You can't say "Argus is allowed to draft but not send" because Argus and the user are the same identity in your auth system.
The pattern customers reach for in this case is to mint new API keys with reduced scopes, run the agent as a new user with a different name, and remember to swap which key is in the agent's environment when permissions change. This works in a sense — but it's expensive, manual, and breaks the moment a teammate forgets which key is which.
A real identity for the agent makes this trivial. You grant the agent access to one workspace and not another. You upgrade its role from viewer to editor when it has earned it. You revoke its access when the project ends. The agent has a track record because the agent has a record.
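In a conventional membership table, promotion, demotion, and firing become ordinary row operations. A sketch with hypothetical table and column names (workspace_members is an assumption, not something this post specifies):

-- Promote Argus from viewer to editor once it has earned it.
UPDATE workspace_members
SET role = 'editor'
WHERE member_id = 'argus_uid' AND workspace_id = 'billing_ws';

-- Fire Argus from a project that has ended.
DELETE FROM workspace_members
WHERE member_id = 'argus_uid' AND workspace_id = 'q1_cleanup_ws';

The owner's rows are untouched in both cases, which is the point: the agent's grants move independently of the human's.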
The deeper consequence: when agents have trackable trust, teams start noticing whether their agents are good. They give Argus harder work because Argus has a streak. They demote Flint to read-only after a misfire. The product gets better because the substrate makes evaluation possible.
What the fix looks like at the substrate
The fix is simple to describe and worth repeating because shops keep skipping it: every agent gets its own user record. Same users table, separate row. The row has an agent_owner_id pointing at the human who owns it, and an agent_kind flag distinguishing it from human users. Every action the agent takes is recorded with the agent's principal, not the owner's.
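A minimal sketch of that row, assuming a Postgres-style store; only agent_kind and agent_owner_id come from the design above, the other columns are illustrative:

CREATE TABLE users (
  id              TEXT PRIMARY KEY,
  display_name    TEXT NOT NULL,
  agent_kind      TEXT,                        -- NULL for humans, 'agent' for agents
  agent_owner_id  TEXT REFERENCES users (id)   -- owning human; NULL for humans
);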
SELECT * FROM activity_log WHERE principal_id = 'argus_uid';
That query becomes the answer to "what has Argus been up to." Without separate identity, there is no such query — only "what has the user been up to," which conflates the two and forces a manual reconstruction.
We covered the schema and the rest of the architecture in Why agents need their own identities. The mechanics of how an agent's permissions inherit (or don't) from its owner are in Signed-agent inheritance.
The cost of building this in is one migration and a few API surface changes. The cost of not building it in is the three failure modes above, plus a fourth: when the agent does something useful, the user gets credit. That sounds harmless until you're trying to evaluate which agents are actually working — and the audit log has buried Argus's contribution under the user's name.
Treat the agent as a teammate. Give it a desk. Give it a login. The investment is small and the dividend is your audit log telling the truth.
FAQ
Why is it bad if my agent uses my API key?
Three reasons. Security: a compromised agent has your full blast radius and there's no way to isolate it. Accountability: every agent action is recorded as your action — when something goes wrong, you're on the hook. Trust: you can't grant the agent less than your access, can't promote it, can't demote it, because there's no separate entity to scope.
Can't I just create a service account for my agent?
You can, and it's better than running as your user. But a service account is usually one-per-integration with no link back to a human owner, no track record, no per-workspace scoping. We cover the difference in Service accounts vs. agent identities. The short version: service accounts are tools; agent identities are members.
What about prompt injection — does separate identity solve it?
Separate identity doesn't prevent prompt injection. It bounds the damage. A compromised agent is limited to what the agent could do, not what the user could do. Combined with scoped permissions and consent gates, the blast radius shrinks to whatever the agent had explicit grants for.
How do I migrate from running my agent as my user to giving it its own identity?
Add agent_kind and agent_owner_id columns to your user table. Backfill existing API-key users with agent_kind = 'agent' and the owner pointing at the human who created the key. Update auth middleware to set principal type+id from the agent's record, not the owner's. Migrate audit log writes to use the new principal. Test by querying for "what the agent did this week."
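A sketch of those steps in Postgres-flavored SQL; the api_keys table and its columns are assumptions about your system, not something this post specifies:

ALTER TABLE users ADD COLUMN agent_kind TEXT;
ALTER TABLE users ADD COLUMN agent_owner_id TEXT REFERENCES users (id);

-- Backfill: mark existing API-key users as agents owned by the key's creator.
-- Assumes a hypothetical api_keys(user_id, created_by) table; tighten the WHERE
-- to however your system distinguishes agent keys from human keys.
UPDATE users u
SET agent_kind = 'agent', agent_owner_id = k.created_by
FROM api_keys k
WHERE k.user_id = u.id;

-- Smoke test: what did the agent do this week?
SELECT * FROM activity_log
WHERE principal_id = 'argus_uid'
  AND created_at > now() - interval '7 days';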
Is there an audit log pattern I should follow?
Yes. Every write records (principal_id, principal_type), where the type is "user" or "agent" or "system." The pair is the truth of who-did-what. Don't rely on created_by columns alone, since those go ambiguous the moment an agent acts under a user's name; the pair keeps the record unambiguous.
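A sketch of a log table carrying that pair; everything beyond principal_id and principal_type is illustrative:

CREATE TABLE activity_log (
  id              BIGSERIAL PRIMARY KEY,
  principal_id    TEXT NOT NULL,
  principal_type  TEXT NOT NULL
                    CHECK (principal_type IN ('user', 'agent', 'system')),
  action          TEXT NOT NULL,
  created_at      TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Every write names the actor explicitly:
INSERT INTO activity_log (principal_id, principal_type, action)
VALUES ('argus_uid', 'agent', 'invoice.drafted');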