Why Agents Prefer Shortcuts - And Why You Must Stop It
Ask an LLM to build a background job processor. You'll get a 400-line main.ts with the queue logic, the worker, the retry mechanism, and the logging — all in one function. No interfaces. No tests. No separation of anything from anything.
It works.
And that's the problem.
The Path of Least Tokens
LLMs don't optimize for maintainability. They optimize for completion.
The fastest path from "write a job processor" to "here's your code" runs straight through a single file with no abstractions. And if you look at what these models trained on — Stack Overflow answers, tutorial blog posts, GitHub snippets — most of that code is intentionally simplified.
Nobody posts a 47-file enterprise architecture to answer "how do I process jobs in Node.js?"
So the model learned that the expected answer is the simple one.
The Three Cheats
The same three patterns show up regardless of model or provider.
Global state abuse. Instead of passing data through explicit interfaces, the AI creates a god-object — some giant context or state variable that everything reads from and writes to. Convenient. Works. Completely unmaintainable the moment two developers (or two agents) touch the same codebase.
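To make the contrast concrete, here is a minimal sketch; the names and shapes are illustrative, not taken from any real codebase:

```typescript
// The god-object pattern: one shared blob everything reads and writes,
// so no function's real inputs or outputs appear in its signature.
const ctx: Record<string, unknown> = {};

function enqueueBad(name: string): void {
  ctx.lastJob = name;                                   // hidden write
  ctx.queueSize = ((ctx.queueSize as number) ?? 0) + 1; // hidden read + write
}

// The same logic with explicit state: dependencies live in the signature,
// so two developers (or two agents) can't trip over hidden globals.
interface QueueState {
  lastJob?: string;
  queueSize: number;
}

function enqueueGood(state: QueueState, name: string): QueueState {
  return { lastJob: name, queueSize: state.queueSize + 1 };
}
```

The second version costs a few more tokens, which is exactly why the model avoids it unprompted.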
Silent failure. Everything gets wrapped in try/catch blocks that swallow errors. Or worse — the error is caught, "something went wrong" is logged, and execution continues as if nothing happened.
Database connections fail. HTTP requests time out. Files don't exist.
The code just... proceeds.
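A small sketch of the anti-pattern next to its fix, with a stand-in for a failing database call (the function names are hypothetical):

```typescript
// Stand-in for a DB call that fails (hypothetical, for illustration only).
function queryDb(id: string): { id: string } {
  throw new Error("connection refused");
}

// The pattern the AI produces: catch, log, and carry on with a bogus value.
function fetchUserBad(id: string): { id: string } | null {
  try {
    return queryDb(id);
  } catch {
    console.log("something went wrong"); // logged and forgotten
    return null;                         // caller silently gets null
  }
}

// Better: catch only to add context, then rethrow so the failure is loud.
function fetchUser(id: string): { id: string } {
  try {
    return queryDb(id);
  } catch (err) {
    throw new Error(`fetchUser(${id}) failed: ${(err as Error).message}`);
  }
}
```

The second version still uses try/catch, but the catch block enriches the error instead of erasing it.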
Implicit dependencies. The code assumes process.env.DATABASE_URL exists. It assumes redis is installed. It assumes the file system is writable. None of these are checked, documented, or handled.
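The fix is to turn those assumptions into checks that run at startup. A minimal sketch, using the env-var name the post mentions:

```typescript
// Fail fast at boot instead of deep inside a request handler.
function requireEnv(
  env: Record<string, string | undefined>,
  name: string,
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At startup, before anything else runs:
// const databaseUrl = requireEnv(process.env, "DATABASE_URL");
```

A crashed boot with a named missing variable beats a runtime failure three layers deep.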
How to Fix This
Don't fight the model. Constrain it.
Interface-First Development
Define the contracts before the AI writes logic. TypeScript interfaces, API schemas, database models. Everything.
```typescript
interface JobProcessor {
  enqueue(job: Job): Promise<JobId>;
  process(handler: JobHandler): Promise<void>;
  retry(jobId: JobId, reason: string): Promise<void>;
  getStatus(jobId: JobId): Promise<JobStatus>;
}

interface JobHandler {
  (job: Job): Promise<JobResult>;
}

type JobStatus = 'pending' | 'processing' | 'completed' | 'failed' | 'retrying';
```
Now the AI's creativity is pinned to a rigid architectural spine. It can implement however it wants, but it must satisfy these interfaces.
The result is always more modular, more testable, and more maintainable.
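For illustration, here is a minimal in-memory implementation pinned to that spine. The `Job`, `JobId`, and `JobResult` shapes are assumptions, since the contracts above leave them undefined:

```typescript
// Assumed shapes for the types the interfaces reference.
type JobId = string;
type JobStatus = 'pending' | 'processing' | 'completed' | 'failed' | 'retrying';
interface Job { id: JobId; payload: unknown; }
interface JobResult { ok: boolean; }
interface JobHandler { (job: Job): Promise<JobResult>; }

interface JobProcessor {
  enqueue(job: Job): Promise<JobId>;
  process(handler: JobHandler): Promise<void>;
  retry(jobId: JobId, reason: string): Promise<void>;
  getStatus(jobId: JobId): Promise<JobStatus>;
}

// One possible implementation; the compiler enforces the contract.
class InMemoryJobProcessor implements JobProcessor {
  private jobs = new Map<JobId, { job: Job; status: JobStatus }>();

  async enqueue(job: Job): Promise<JobId> {
    this.jobs.set(job.id, { job, status: 'pending' });
    return job.id;
  }

  async process(handler: JobHandler): Promise<void> {
    for (const entry of this.jobs.values()) {
      if (entry.status !== 'pending' && entry.status !== 'retrying') continue;
      entry.status = 'processing';
      const result = await handler(entry.job);
      entry.status = result.ok ? 'completed' : 'failed';
    }
  }

  async retry(jobId: JobId, reason: string): Promise<void> {
    const entry = this.jobs.get(jobId);
    if (!entry) throw new Error(`Unknown job ${jobId}: cannot retry (${reason})`);
    entry.status = 'retrying';
  }

  async getStatus(jobId: JobId): Promise<JobStatus> {
    const entry = this.jobs.get(jobId);
    if (!entry) throw new Error(`Unknown job ${jobId}`);
    return entry.status;
  }
}
```

Swap the Map for Redis or Postgres later; the interface doesn't change, and neither do the callers.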
The Multi-Step Build Loop
Don't ask the AI to "write the app."
That's how you get the 400-line main file.
Instead, break it into verified steps:
- Generate the schema/types. Verify.
- Generate the unit tests. Verify.
- Implement the logic to pass those tests. Verify.
- Add error handling and logging. Verify.
Each step is a separate prompt. Each step gets verified before the next one starts. Slower — but you end up with code that actually works in production.
The Bigger Point
The transition from "LLM wrapper" to "agentic system" isn't about a fancier framework.
It's about recognizing that the AI will always take shortcuts unless shortcuts are structurally impossible.
A wrapper passes strings back and forth. An agentic system manages state, handles retries, observes its own reasoning — and crucially — doesn't trust itself.
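That "doesn't trust itself" stance can be sketched as a retry wrapper where every result must pass an external check before it's accepted; the names are illustrative:

```typescript
// Every attempt's output is verified before it is returned.
// Unverified output is treated the same as a thrown error.
async function withRetry<T>(
  attempt: () => Promise<T>,
  verify: (result: T) => boolean,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < maxAttempts; i++) {
    try {
      const result = await attempt();
      if (verify(result)) return result; // only verified output escapes
      lastError = new Error(`Verification failed on attempt ${i + 1}`);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

The verifier is deliberately outside the attempt: the system checking the work is not the system that produced it.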
The best agentic architectures treat the LLM the same way you'd treat a brilliant but unreliable contractor. Great ideas. Impressive speed.
But you'd better review the work before it goes live.
How to cite
Pokhrel, N. (2026). "Why Agents Prefer Shortcuts - And Why You Must Stop It". Native Agents. https://nativeagents.dev/posts/limitations/agent-shortcuts-architecture