/loop — How I Turned Claude Code Into an Autonomous Agent
Claude Code is a great assistant. You write a prompt, you get a response. You tell it “fix this,” it fixes it. But it’s still ping-pong — you serve, it returns.
What if you want it to work on its own? Not as a chatbot, but as an agent — one that looks at what needs doing, plans a solution, implements it, and cleans up after itself?
That’s exactly what I wanted. And that’s exactly what I built. (More about me and my work with AI.)
What is /loop
/loop is a built-in Claude Code command. The syntax is dead simple:
```
/loop [interval] prompt
```
It takes your prompt and re-runs it on a timer. Simple examples:
```
/loop 5m run npm test and fix any failures
/loop 10m check build status and report
```
By itself, this isn’t revolutionary. Repeated prompt execution is just a loop. But here’s the key insight: the prompt can be a custom skill. And a skill can be an entire autonomous workflow.
```
/loop 20m /improve-gitlab
```
That single line starts an agent that runs all night and has merge requests waiting for your review in the morning.
Evolution: From features.md to GitLab Issues
I didn’t jump straight to GitLab. The first version was much simpler — and its limitations taught me what I actually needed.
V1: /improve-jiridolejs
The first iteration used a features.md file with checkboxes:
```markdown
## Need Approval
- [ ] Add dark mode toggle
- [ ] Optimize homepage images

## Approved
- [x] Fix mobile menu
- [x] Add structured data for blog
```
The agent read the file, picked up the first approved task, and implemented it. Everything went into a single agent-features branch. Research ran through Playwright personas — the agent would “dress up” as a mobile user, accessibility auditor, or SEO expert and crawl the site.
It worked. But it had problems:
- No code review. Changes went straight into code without human eyes.
- Single branch for everything. Merge conflicts. No way to cherry-pick a single fix.
- Hard to track. What did the agent do? When? Why? Answers lived in the git log, but that’s not a workflow.
V2: /improve-gitlab
The second version moved everything to GitLab Issues. Every task is an issue with labels:
- Needs Approval — agent proposed it, waiting for a human
- Planned — agent wrote an implementation plan
- Approved — human approved it, agent can implement
Each issue gets its own branch and merge request. Humans review MRs like any other code.
Why does this work better?
- Traceability. I can see the full history — from proposal through plan to implementation.
- Parallel work. Each issue is isolated.
- Review before merge. No code hits production without human eyes.
Anatomy of a Single Cycle
This is the core of the entire system. Every 20 minutes, one /improve-gitlab cycle runs. Here’s what happens.
Pre-flight check
Before the agent spends a single token on LLM work, it makes one API call to GitLab. It checks three things:
- Are there approved issues to implement?
- Are there issues waiting for approval?
- Are there new human comments?
If all three answers are “no” — the agent stops. No work, no tokens, no cost. This pattern is critical. Without it, the agent would burn through your API budget every 20 minutes analyzing a codebase with nothing to do.
Housekeeping (temp file cleanup, branch pruning) runs at most once every 24 hours — no reason to tidy up on every cycle.
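The gate itself is just boolean logic over three counts, plus a timestamp check for the daily housekeeping. A minimal Python sketch of that logic (the function names and the throttle constant are illustrative, not the actual skill code):

```python
import time

HOUSEKEEPING_INTERVAL = 24 * 60 * 60  # housekeeping at most once per 24 hours

def should_work(approved: int, needs_approval: int, new_comments: int) -> bool:
    """Run a full cycle only if at least one of the three checks finds work."""
    return approved > 0 or needs_approval > 0 or new_comments > 0

def housekeeping_due(last_run_ts: float, now: float) -> bool:
    """True when the last housekeeping run is more than 24 hours old."""
    return now - last_run_ts >= HOUSEKEEPING_INTERVAL
```

In the real skill the three counts come from a single GitLab API call (listing open issues filtered by label and checking for fresh comments); when `should_work` returns false, the cycle exits before any LLM tokens are spent.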
Bootstrap
If the pre-flight check finds work, the agent prepares:
- Switches to `main` and pulls latest changes
- Reads `agent-knowledge.md` — persistent memory between cycles
- Verifies the dev server, Playwright, and `glab` (GitLab CLI) are available
Orient — The Decision Engine
The agent looks at all issues and decides what has the highest priority:
- Approved — implement an approved task (always first)
- Plan Refinement — rework a plan based on human comments
- Planning — write an implementation plan for a new task
- Housekeeping — repository maintenance
- Research — crawl the site and find improvements
- Content Research — evaluate potential topics for new content
Approved always comes first. The logic is simple: someone took the time to review an issue, approve it, and write comments. That’s a signal it matters.
Planning
When the agent finds an issue without a plan, it analyzes the codebase and writes an implementation plan:
- Which files it will change
- What approach it will take
- What the success criteria are
The plan gets appended as a comment to the issue, and the label moves to Planned. I read it, maybe leave a comment (“not this way, try that instead”), and when it looks right, I move it to Approved.
Development
This is where the main work happens. The agent takes an approved issue and:
- Creates a branch `issue/<IID>-<slug>`
- Spawns an implementation subagent with a clear brief
- After implementation, runs a quality gate — build must pass, Playwright takes screenshots
- Pushes the branch, creates a merge request with “Closes #IID”
> You open GitLab in the morning and there are fresh merge requests waiting. Your job is to review, not to code.
Quick-win issues (weight 1) go before more complex ones (weight 2, 3). The logic: small changes merge fast, big ones deserve more attention.
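The branch naming and the quick-win ordering are both mechanical. A sketch with hypothetical helpers, assuming the issue weight is a plain integer field:

```python
import re

def branch_name(iid: int, title: str) -> str:
    """Build issue/<IID>-<slug>: lowercase, hyphen-separated alphanumerics."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"issue/{iid}-{slug}"

def implementation_order(issues: list[dict]) -> list[dict]:
    """Quick wins (weight 1) come before heavier issues (weight 2, 3)."""
    return sorted(issues, key=lambda issue: issue.get("weight", 99))
```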
Research
The research phase uses perspective rotation. The agent takes on different personas:
- First-time visitor — is it clear what this site does? Where do I click?
- Mobile user — does the layout work? Are tap targets big enough?
- Accessibility auditor — contrast ratios, alt text, keyboard navigation?
- SEO expert — meta tags, structured data, page speed?
Each persona crawls the site through Playwright (desktop and mobile viewports). Findings automatically become new issues with the Needs Approval label. I then decide what’s worth implementing.
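The rotation can be as simple as indexing into a persona table by cycle number. A sketch under the assumption that each research cycle takes one persona (the article doesn't specify the exact rotation scheme):

```python
# Persona name plus the guiding question it crawls the site with.
PERSONAS = [
    ("first-time visitor", "Is it clear what this site does? Where do I click?"),
    ("mobile user", "Does the layout work? Are tap targets big enough?"),
    ("accessibility auditor", "Contrast ratios, alt text, keyboard navigation?"),
    ("SEO expert", "Meta tags, structured data, page speed?"),
]

def persona_for_cycle(cycle: int) -> tuple[str, str]:
    """Rotate through the personas, one per research cycle."""
    return PERSONAS[cycle % len(PERSONAS)]
```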
Housekeeping
Maintenance that would otherwise eat half an hour a week:
- Delete temp files and build artifacts
- Prune merged and stale branches
- Update the knowledge base (what it learned, what changed)
- Close stale issues
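The branch-pruning step reduces to a pure filter. In this sketch the 30-day staleness cutoff is my assumption, not a value from the skill:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=30)  # assumption: untouched 30+ days counts as stale

def prunable(branches: list[dict], now: datetime) -> list[str]:
    """Names of branches that are merged, or stale beyond STALE_AFTER."""
    return [
        b["name"] for b in branches
        if b["merged"] or now - b["last_commit"] > STALE_AFTER
    ]
```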
Content Research
The most interesting phase. The agent convenes a virtual C-level panel — CEO, CTO, CFO, CHRO, and COO personas that evaluate potential blog topics. Each persona judges from their role’s perspective: the CEO asks “is this strategic?”, the CFO asks “what’s the ROI?”, the CTO asks “is this technically relevant?”
If 2+ personas approve a topic, the agent creates a content issue. This surfaces topics I wouldn’t think of on my own — because I see the site through a developer’s eyes, not through the eyes of an HR director.
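The 2-of-5 vote is a one-liner. The persona names come from the article; the function itself is a hypothetical sketch:

```python
def panel_approves(votes: dict[str, bool], threshold: int = 2) -> bool:
    """A topic becomes a content issue when at least `threshold` personas vote yes."""
    return sum(votes.values()) >= threshold
```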
What It Looks Like in Practice
My typical day with /improve-gitlab:
Evening: I check issues in GitLab. Approve what makes sense. Comment on plans where something’s off. Fire up /loop 20m /improve-gitlab.
Morning: Open GitLab. Fresh merge requests, each with a description of what changed and why. I review the diff, test on preview, merge. Done.
During the day: An issue shows up with “Needs Approval” — the agent found during its nightly research that the mobile menu has a contrast problem. I check the screenshot, approve it, and the agent fixes it in the next cycle.
My role has shifted. Instead of “developer who occasionally reviews,” I’m “reviewer who occasionally steers.” That’s a fundamental shift in productivity.
Patterns You Can Steal
You don’t need to build a full /improve-gitlab. /loop is useful with simple prompts too:
CI babysitter
```
/loop 5m check if build passed, if not analyze the failure and fix it
```
The agent monitors your CI pipeline and automatically fixes failing builds.
Continuous testing
```
/loop 10m run the full test suite, fix any regressions
```
Let the agent watch your tests while you write new features on a different branch.
Site audit
```
/loop 30m audit the site for accessibility issues and create gitlab issues
```
Regular accessibility audits that nobody would otherwise do.
Doc freshness
```
/loop 1h compare documentation against actual code and flag drift
```
Documentation that never falls out of sync with reality.
When NOT to Use /loop
/loop isn’t free. Every cycle costs API tokens. Without a pre-flight check, you’ll spend more overnight than a junior developer costs per day.
Don’t use /loop when:
- The task needs human judgment at every step. If you have to approve every change, `/loop` won’t help — it just adds overhead.
- You don’t have guardrails. An agent without a quality gate (build checks, tests, screenshots) can generate tech debt faster than you can pay it down.
- You don’t have a clear scope. “Improve the site” is too vague. “Find and fix accessibility issues on the homepage” is the right granularity.
This is exactly why the pre-flight check pattern matters: a cheap check at the start saves an expensive LLM cycle when there’s nothing to do.
Conclusion
/loop is the bridge between assistant and agent. The command itself is simple — it repeats a prompt over time. But the real unlock comes when you combine it with a well-designed skill.
My /improve-gitlab is one example. Your skill could solve something completely different — monitoring, testing, report generation. The principle is the same: define a cycle, add a pre-flight check, set up a quality gate, and let the agent work.
If you’re curious about other tools I’ve built for Claude Code, check out 5 Tools I Built for Claude Code.
And if you want a quick reference for commands and techniques — grab the free AI Cheat Sheet for Developers.