The Simplest Feedback Loop
Make a skill. Use it. Reflect on it. Repeat.
The fastest way to improve your AI workflow isn’t a better model, a bigger context window, or a fancier orchestration pattern. It’s a three-step loop that takes five minutes to set up.
Make a skill. Use it. Reflect on it.
Memory Zero
Every new Claude Code session starts blank. No memory of what you corrected last time. You tell the agent your conventions, it drifts, you correct it, the session ends, the corrections vanish. Next morning, same dance.
CLAUDE.md helps, but it’s blunt: global context dumped into every conversation regardless of task. Teams try to solve this with massive instruction files that cover everything from commit formatting to deployment patterns. Nobody reads them. Nobody maintains them.
The real problem isn’t memory. It’s that corrections don’t compound.
The Loop
A skill is a markdown file the agent loads on demand. Not global context. Not a project-wide instruction set. Scoped, task-shaped guidance that activates when you need it. I have a /review skill. First version was maybe 20 lines: check for TypeScript `any`, flag missing error handling, verify test coverage. The bare minimum of what I look for in a PR. Five minutes to write. Wrong in a dozen ways I wouldn’t discover until I used it.
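The shape of that first version, roughly (reconstructed from memory, so the wording is approximate):

```markdown
# Review

Review the current diff for problems worth a human's attention.

## Checks

1. Flag any use of TypeScript `any`
2. Flag missing error handling around external calls
3. Verify changed code paths have test coverage
```

Nothing clever. The point is that it exists as a file the agent can load, which means it's a file that can be edited.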
So I used it. The agent followed the skill and I corrected where it was wrong. “No, I don’t care about that import order, check the data flow.” “Actually, that’s fine for an internal utility. Only enforce strict types at boundaries.” “Instead of flagging every missing test, focus on untested error paths.”
Those corrections are the signal. They’re the gap between what you wrote down and what you actually care about. You don’t know your own preferences until you see them violated.
Then: /reflect.
It reads the session, finds every correction and confirmed pattern, classifies them by confidence, and proposes updates to the skill file. You approve or reject. The skill updates. Thirty seconds. Next session starts at a higher baseline.
Here’s an abbreviated version of the reflect skill itself:
```markdown
# Reflect

Analyze session history for learnings. Persist to skills and memory.

## Process

1. Scan session for corrections, successes, patterns
2. Classify by confidence:
   - High: explicit corrections ("actually", "no,", "instead", "don't")
   - Medium: confirmed approaches ("perfect", tests passing first run)
   - Low: context (file paths, tool names, conventions)
3. Propose skill edits. Wait for approval before writing.

## Destination

- If in a repo with skills → update skill in-place
- Otherwise → prompt for project or global skills directory
```
That’s it. The whole mechanism is a markdown file that reads your session and proposes edits to other markdown files. No infrastructure. No database. No pipeline. Markdown all the way down.
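The confidence classification in step 2 is, at heart, marker matching over the transcript. A toy Python sketch of the idea (my own illustration, not Claude Code's actual mechanism; the marker lists echo the skill's, and real sessions need richer parsing):

```python
# Toy illustration: classify user messages by the confidence of the
# signal they carry, using the same markers the reflect skill lists.
HIGH_MARKERS = ("actually", "no,", "instead", "don't")
MEDIUM_MARKERS = ("perfect", "looks good", "tests pass")

def classify(message: str) -> str:
    """Return 'high', 'medium', or 'low' for one user message."""
    text = message.lower()
    if any(m in text for m in HIGH_MARKERS):
        return "high"    # explicit correction: strongest signal
    if any(m in text for m in MEDIUM_MARKERS):
        return "medium"  # confirmed approach
    return "low"         # background context only

session = [
    "Actually, check the data flow instead.",
    "Perfect, ship it.",
    "The handlers live under src/api/.",
]
for msg in session:
    print(classify(msg), "-", msg)
```

The real work is in proposing the right skill edit from each signal, but the triage itself is this simple: corrections outrank confirmations, confirmations outrank context.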
Compound Interest
First session: messy. The skill flags everything with equal weight. You spend more time dismissing noise than reading the review.
Fifth session: it knows you care about error paths more than import order. It checks data flow at boundaries, not cosmetic consistency. It stops flagging things you’ve waved through three times already.
Tenth session: the review reads like yours. It catches what you’d catch, in the order you’d catch it. Your sensibilities, codified. Not because you sat down and wrote a comprehensive style guide. Because you kept reviewing code and the corrections stuck.
That’s what separates a skill from a static instruction file. The instruction file is a snapshot of what you thought mattered when you wrote it. The skill converges on what actually matters through use.
The Smallest Harness
I’ve written about harness engineering as a maturity ladder, from raw model calls up to self-verifying infrastructure. A skill is the smallest unit on that ladder. It has opinions about a workflow. It constrains the agent. And with /reflect, it has a built-in improvement mechanism that pushes it up naturally: vague instructions become specific constraints become verifiable criteria. You don’t plan the evolution. It happens because you keep correcting and the corrections persist.
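To make that evolution concrete, here's one rule at three stages (an invented trajectory, not a transcript of my actual skill):

```markdown
<!-- early: vague instruction -->
- Check error handling

<!-- after a few corrections: specific constraint -->
- Only flag missing error handling at boundaries: API handlers,
  external calls, user input

<!-- later: verifiable criterion -->
- Every exported handler must catch failures and map them to a
  typed error response; flag any that don't
```

Each stage is easier for the agent to apply consistently than the one before it, and each came from a correction, not a planning session.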
For teams, this is the lowest-friction entry point I’ve found. Not “adopt my orchestration framework.” Just: write down how you do one thing. Use it. Improve it. The workflow stays personal. The improvement is structural.
Pick something you do repeatedly. Code review. Deployment checks. Data validation. Write the basics in a markdown file. Use it. Correct it. Run /reflect. The worst thing that happens is the skill doesn’t work yet. That’s the point. It will.


