
Why I Never Read the First Plan from Claude Code

· 7 min read

Hey,

I have a rule that surprises people: I never read the first plan that Claude Code generates.

Not because it’s bad. It looks great. So great that you want to approve it immediately and let the AI get to work. And that’s exactly the problem.


The Problem with the First Plan

When Claude Code gets a task, it generates a plan. It breaks down the problem into steps, suggests which files to modify, which functions to call, what order to follow. It sounds reasonable. It looks professional. And your brain says: “Yeah, that checks out.”

But the plan almost always has holes. Specifically:

It references things that don’t exist. The AI invents a function name that sounds logical but isn’t in the codebase. Or it assumes an API endpoint that nobody ever wrote. The plan is internally consistent — but it doesn’t match project reality.

It ignores project conventions. Every project has its own style. File naming, component structure, error handling patterns. The AI might not pick up on these, even with full codebase access. It does things “correctly” in general, but wrong for your specific project.

It misses edge cases. The first plan handles the happy path. What if the API returns 429? What if the user submits an empty form? What if the database is down? These rarely appear in the first plan.

It doesn’t address security. The plan adds a new endpoint but doesn’t mention authentication. It stores data but skips sanitization. Not because the AI doesn’t know what those are — but because it focuses on what you explicitly asked for.

The first plan from AI is like a first draft. You never publish a first draft. So why would you approve a plan without revision?


Why We Approve It Anyway

Here’s the uncomfortable part: we approve these plans because they look good. It’s a cognitive bias — when someone (or something) presents a structured, confident proposal, your brain switches from “critical analysis” mode to “validation” mode. It’s the same mechanism that creates cognitive debt — velocity without understanding.

You read the plan looking for reasons it’s right. Not reasons it’s wrong.

You’d do the same with a human colleague — but you know your colleagues, you know where they make mistakes, you have context. With AI, that’s missing. AI writes with absolute confidence whether the plan is perfect or completely off.


My Solution: /replan

I wrote a Claude Code plugin that handles this for me. It’s called claude-replan, and it works like this:

  1. I give a task. Claude Code generates a plan.
  2. Instead of reading the plan, I type /replan.
  3. The plugin dispatches parallel subagents that examine the plan from different angles.
  4. I get back a revised plan with specific fixes.

Each subagent looks at the plan differently:

  • Codebase alignment — do the functions and modules referenced in the plan actually exist? Does it match the current project structure?
  • Feasibility — can these steps actually be executed? Are there dependencies that would break things?
  • Security — is authentication, validation, or sanitization missing?
  • Fresh perspective — is this even the best approach, or is there a simpler solution?

These agents run in parallel, so the review takes seconds, not minutes.
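Under the hood, a Claude Code slash command is just a markdown file in the project’s .claude/commands/ directory — the filename becomes the command name. A stripped-down /replan could look roughly like this (a sketch of the idea, not the actual plugin source; the real claude-replan is more elaborate):

```markdown
<!-- .claude/commands/replan.md — illustrative sketch, not the plugin's real source -->
Review the plan you just proposed before I approve it.
Launch subagents in parallel, each with a single focus:

1. Codebase alignment: verify that every function, module, and file
   the plan references actually exists in this repository.
2. Feasibility: check that the steps can be executed in order and
   that no dependency is missing or would break.
3. Security: look for missing authentication, validation,
   rate limiting, or sanitization.
4. Fresh perspective: propose a simpler approach if one exists.

Merge the findings into a revised plan with concrete fixes.
```

The point isn’t the exact wording — it’s that the review prompt is versioned with the project, so nobody has to remember to ask.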

What It Looks Like in Practice

# You give a task in Claude Code
> Add an endpoint to export user data as CSV

# Claude Code generates a plan...
# Instead of approving:
> /replan

# The plugin dispatches agents that review the plan
# A few seconds later, you get a revised plan with fixes:
# - "UserService.exportData() doesn't exist, use UserRepository.findAll()"
# - "Missing rate limiting on the new endpoint"
# - "CSV generation should be async — for 10k+ users it'll block the event loop"

The result: a plan I can trust. Not because it’s from AI — but because it went through critical review from multiple angles.

The plugin is open source: claude-replan on GitHub.


What If You Don’t Want a Plugin

Totally fair — not everyone wants to install another tool. But the principle works without it too. After the plan is generated, just manually ask a few questions:

  1. “Do all the functions and modules referenced in this plan actually exist?” — This catches invented APIs.
  2. “What project conventions does this plan violate?” — Claude Code knows the codebase, but it might not notice conventions until you ask.
  3. “What happens when [edge case]?” — Fill in a specific scenario: empty input, timeout, race condition.
  4. “Is there a simpler way to do this?” — The first plan is usually the most obvious approach, not the best one.

This takes an extra minute. And it saves you an hour of debugging when it turns out the plan relied on a function that doesn’t exist.
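In practice you can paste all four checks as a single follow-up message, in the same style as the transcript above (the wording here is illustrative):

```
# Claude Code shows its plan. Instead of approving:
> Before I approve this plan: (1) do all the functions and modules
> it references actually exist in this codebase? (2) which project
> conventions does it violate? (3) what happens on empty input, a
> timeout, or a race condition? (4) is there a simpler approach?
```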


Why This Matters

This isn’t just about planning errors. It’s about how you work with AI.

If you approve the first plan without review, you’re treating AI as an oracle. You trust it because it sounds convincing. And that’s a dangerous habit — because AI will sound convincing even when it’s wrong.

If you review the plan — whether manually or through the plugin — you’re treating AI as a collaborator. Someone proposes, someone reviews. Same principle as code review with humans. Without that, you end up with workslop — code that looks done but nobody truly understands.

AI that nobody reviews isn’t a tool. It’s a technical debt generator.

I use /replan on every non-trivial task. Not because I don’t trust the AI. But because I know how first drafts work — whether they’re written by a human or a machine.


Summary

  • The first plan from AI looks good but has holes — invented functions, missing edge cases, violated conventions.
  • We approve it too easily — cognitive bias pushes us toward validation instead of critique.
  • The fix: multi-angle review — either manually (questions) or automatically via claude-replan.
  • The principle matters more than the tool — even without the plugin, you can review the plan. The key is not approving the first version.

Try it. Next time Claude Code offers a plan, don’t approve it. Ask: “What’s wrong with this?” The answer will surprise you.

— Jirka

