Many Claude Code sessions start the same way: you describe what you want, let Claude run, and hope for the best. That works surprisingly often, until it doesn't. The design doesn't fit the codebase, or context gets lost halfway through a long session. The same mistakes keep showing up because nothing changed between sessions.
I've landed on three practices that fixed each of those problems. They turn auto mode from "let it rip" into something deliberate, where you do the thinking in markdown and the AI does the typing in code.
## Design before you plan
The typical approach starts at plan creation: describe the task, break it into steps, go. In practice, the most important phase happens before that.
Before I create a plan, I feed Claude everything relevant to the task: Jira issues, GitHub threads, existing design docs. Then I tell it to investigate: read the code we'll be touching, trace the interfaces, check how similar problems were solved elsewhere in the codebase. Only after that investigation do I ask for a design.
What comes back isn't a plan. It's a design discussion. Claude presents options with confidence estimates: "70/30, I'd lean toward option A because it reuses the existing middleware." I push back, agree, or ask for a third option. We go back and forth until the approach converges.
This works because of a single instruction in my global CLAUDE.md:
> When there are several alternatives, don't automatically assume that the user wants to go for the most likely solution. Show the alternatives to the user with pros and cons and give them a choice. Include your estimated confidence for each option.
Without that instruction, Claude picks the most likely approach and runs with it. With it, the design discussion happens naturally. Claude becomes a collaborator who presents tradeoffs, not an assistant who guesses what you want.
Only when the design has landed do I run /gh-create-plan, a custom skill from claude-plan-skills that turns the conversation into a structured GitHub issue. The plan captures the decisions, the approach, and the steps to execute. Here's what a plan issue looks like after that process: a description with the "what" and "why," a design section with a sequence diagram, implementation steps with checkboxes, and a comment recording a key architectural decision.
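The original post shows that issue as a screenshot. As a rough, hypothetical sketch, with illustrative section names and content rather than the skill's actual output, the shape is:

````markdown
## Summary
Add OAuth login to the CLI: what the feature does and why it's needed now.

## Design
Chosen approach: reuse the existing middleware (option A from the design discussion).

```mermaid
sequenceDiagram
    CLI->>Browser: open authorization URL
    Browser->>CLI: redirect to local callback with code
    CLI->>API: exchange code for access token
```

## Implementation
- [ ] Authorization flow with local callback server
- [ ] Token storage
- [ ] Integration tests
````

A key architectural decision then gets recorded as a comment on the issue, where later sessions can find it.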
The difference is stark. "Implement feature X" with no context leads Claude to guess the design and touch the wrong modules. Full context, code investigation, and a design dialogue leads to a plan that fits the codebase.
## Hand over before context fills up
The first practice is about how you do the work. This one is about how you sustain it.
Claude Code conversations have a context window. As it fills, auto-compaction kicks in and silently drops earlier context. The conversation keeps going, but quality degrades: decisions made earlier get forgotten, style drifts, tokens burn on work that no longer aligns with the original intent. You might not notice until the output goes sideways.
I don't let that happen. My statusline shows the current context percentage, and around 45% I trigger a handover. A custom skill packages everything a fresh session needs: the plan issue reference, the repo and branch, a summary of what was done, and the specific next step. I paste that into a new conversation and keep going.
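For reference, Claude Code's statusline is configured via `statusLine` in `settings.json`, which runs a command and pipes session JSON to it on stdin. A minimal sketch, where the script path is hypothetical and the script itself must compute the percentage from the fields your Claude Code version provides:

```json
{
  "statusLine": {
    "type": "command",
    "command": "~/.claude/statusline-context.sh"
  }
}
```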
Here's what a handover prompt looks like:
```
/gh-read-plan myorg/prq#14

Repo: ~/git/prq, branch feature/oauth-login.

The authorization flow works end-to-end. During implementation
we switched from a fixed callback port to random port selection
after hitting conflicts on Linux CI. That change isn't on the
issue yet.

Next: token storage. The encrypted-file-over-keychain decision
is in a comment on the issue.

Gotcha: integration tests need GITHUB_CLIENT_ID in the env.
```
The plan issue tells the new session what's done and what's next. The handover just points it to the right place and flags the non-obvious stuff. No re-explaining, no context loss, no compaction gambling.
Don't let Claude Code manage session lifespan for you. Compaction is a safety net, not a strategy. By the time it fires, you've already lost context you needed.
## The feedback loop
The first two practices shape individual sessions. This one improves the whole system over time.
When something doesn't work in a session, I fix it with Claude. Not the code; the instructions. There are four targets: custom skills, global CLAUDE.md, settings.json, and project-level docs. The fix goes wherever it belongs, and every future session benefits.
A concrete example. Early on, Claude would confidently call API methods that didn't exist. It had seen them in training data, but the API had changed since. The fix was a single instruction in my CLAUDE.md:
> Before using a CLI tool, API, or library you are not fully confident about, run `chub search "<tool name>"` to check if curated docs exist. If they do, fetch them with `chub get <id>` and use the docs instead of relying on training data.
That rule points Claude at Context Hub, a crowdsourced tool-docs project from DeepLearning.AI. Now instead of hallucinating an API call, Claude fetches current documentation first. One instruction, applied permanently to every future session.
This compounds. Over a few months, I've built over two dozen custom skills, from code review to handover to structured debugging, and continuously refined my instructions. Each edit is small: a rule added after a bad session, a convention captured after a good one. But the cumulative effect is significant. Claude behaves differently for me than it does out of the box, because my instructions encode months of iteration.
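For context, a custom skill is just a markdown file with YAML frontmatter in your skills directory. A minimal hypothetical sketch, with the name and body purely illustrative:

```markdown
---
name: gh-handover
description: Package the current session state into a handover prompt
---

Summarize the current session for handover: the plan issue reference,
the repo and branch, what was completed, decisions not yet recorded on
the issue, and the specific next step. Output a single prompt the user
can paste into a new conversation.
```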
The best part: Claude is often the best collaborator on its own setup. When I hit a problem, I describe it and ask Claude to propose a fix to the relevant instruction or skill. It knows the format, the conventions, and the constraints. I review the change, and we move on.
## The practice
These three aren't optimizations; they're the practice. Without them, auto mode is vibe coding at speed.
Claude knows Claude best. Scale that up, and these single-session practices become multi-session orchestration, with Claude writing the instructions for each session. More on that in a future post.
If you want to start now, add one instruction to your CLAUDE.md today. The one about presenting alternatives is a good first choice.