Code review is where a lot of engineering time silently disappears. You open a PR, context-switch from whatever you were doing, try to hold the whole diff in your head, and write comments that are either too vague to be useful or so detailed they take 20 minutes to type. AI can compress a big chunk of that — but only if you're deliberate about how you use it.
Here's the exact workflow I follow before and during code reviews, with the prompts I actually copy-paste.
## Step 1: Dump the diff into context
Before touching anything else, I pull the raw diff and paste it into my AI tool of choice (I use a mix of Claude and GPT-4o depending on context length).
Don't summarize it yourself first. Give it the raw diff and let the model build its own mental model. Summarizing before you prompt it biases the output toward your existing assumptions — which defeats half the point.
```shell
git diff main..feature-branch > /tmp/review.diff
```
Then paste the contents directly into the chat window. If it's a big PR, I split it by directory or logical chunk.
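Splitting by directory is just a git pathspec. Here's a runnable sketch; the throwaway demo repo exists only so the commands work standalone, and the branch and directory names (`feature-branch`, `src/api/`, `src/workers/`) are made up — in your repo, only the two scoped `git diff` lines at the end matter.

```shell
# --- throwaway demo repo so this snippet runs as-is ---
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git checkout -qb main
git config user.email "reviewer@example.com" && git config user.name "Reviewer"
mkdir -p src/api src/workers
echo "v1" > src/api/handler.py && echo "v1" > src/workers/job.py
git add -A && git commit -qm "base"
git checkout -qb feature-branch
echo "v2" > src/api/handler.py && echo "v2" > src/workers/job.py
git commit -qam "feature change"

# --- the actual trick: scope the diff with a pathspec, one file per chunk ---
git diff main..feature-branch -- src/api/     > /tmp/review-api.diff
git diff main..feature-branch -- src/workers/ > /tmp/review-workers.diff
```

Each chunk then gets its own chat session, so no single paste blows past the context window.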
## Step 2: Ask for a structured first-pass summary
The first prompt isn't about finding bugs. It's about orienting yourself fast so you can review intentionally instead of just reading line by line hoping something jumps out.
Copy-paste this:
```
You are a senior software engineer doing a code review. Here is a diff:

[PASTE DIFF HERE]

Give me:
1. A 3-sentence summary of what this change is doing
2. The top 3 areas I should focus my review attention on (e.g. error handling, concurrency, data validation)
3. Any immediate red flags (missing tests, hardcoded values, obvious logic gaps)

Be direct. Skip praise.
```
The "be direct, skip praise" line matters more than it sounds. Without it, you'll get a paragraph of "This looks like a well-structured change!" before anything useful.
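If you do this often, you can assemble the paste in one command instead of hand-editing the template each time. A hedged sketch, assuming you've saved the prompt split around the `[PASTE DIFF HERE]` slot (the prompt text below is condensed; the file paths are arbitrary):

```shell
# One-time setup: the Step 2 prompt, split around the diff slot (condensed here)
printf 'You are a senior software engineer doing a code review. Here is a diff:\n' > /tmp/prompt-prefix.txt
printf 'Give me a summary, focus areas, and red flags. Be direct. Skip praise.\n' > /tmp/prompt-suffix.txt

# Stub diff so this runs standalone; in practice it comes from Step 1
printf 'diff --git a/x b/x\n' > /tmp/review.diff

# Assemble the full paste: prompt prefix, diff, prompt suffix
cat /tmp/prompt-prefix.txt /tmp/review.diff /tmp/prompt-suffix.txt > /tmp/review-paste.txt
# macOS: pbcopy < /tmp/review-paste.txt
# Linux: xclip -selection clipboard < /tmp/review-paste.txt
```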
## Step 3: Probe for edge cases in specific functions
Once you've done a first human pass, go back to the functions that felt risky or complex. This is where AI earns its keep — it's fast at enumerating inputs you didn't think about.
Copy-paste this:
```
Here is a function from the PR:

[PASTE FUNCTION]

List every edge case and failure mode you can identify. For each one:
- Describe what input or condition triggers it
- Describe what goes wrong
- Suggest a one-line fix or mitigation

Assume the codebase is a production backend service with real users.
```
The "production backend with real users" framing shifts the output from academic to practical. You'll get things like "if this returns null and the caller doesn't check, you'll get a 500 on checkout" rather than "consider handling undefined."
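One practical detail: the diff only shows changed hunks, so when you paste a function, grab the whole file as it exists on the branch rather than reassembling it from `+`/`-` lines. `git show <branch>:<path>` does this directly. A runnable sketch — the demo repo, file path, and `charge` function are all illustrative; in your repo only the final `git show` line matters:

```shell
# --- throwaway demo repo so this snippet runs as-is ---
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git checkout -qb feature-branch
git config user.email "reviewer@example.com" && git config user.name "Reviewer"
mkdir -p src/api
printf 'def charge(user, amount):\n    return gateway.charge(user, amount)\n' > src/api/billing.py
git add -A && git commit -qm "add billing"

# --- the actual trick: pull the full file from the branch for pasting ---
git show feature-branch:src/api/billing.py > /tmp/function-context.py
```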
## Step 4: Draft your review comments
This is the step most people skip, but it's the highest-leverage one. Instead of writing comments from scratch, paste the issues you found and ask for concise, non-condescending phrasing.
```
I want to leave a code review comment about this issue: [describe issue].

Write 2 versions — one for a junior dev, one for a senior dev.
Keep both under 3 sentences. Be direct but collaborative in tone.
```
This alone has meaningfully improved my review relationships. It's easy to sound harsh when you're in a hurry. Having two options lets you pick the one that fits the relationship.
## Step 5: Final sanity check before approving
Before I hit "Approve," I do one last prompt:
```
Given everything we've discussed about this diff, what is the single most important thing I should verify manually before approving this PR?
```
This forces prioritization. You can't manually verify everything, but you can verify the one thing that matters most.
## A few things this workflow is not
- It's not a replacement for reading the code yourself. The AI misses things, especially around business logic it has no context for.
- It's not for rubber-stamping. If the summary doesn't match what you expected the PR to do, that's a signal to dig in harder, not approve faster.
- It's not magic. The quality of output is directly proportional to how specific your prompts are.
The biggest productivity unlock isn't any single prompt — it's having a repeatable process so you're not reinventing the approach on every review.
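The diff-dumping half of that process can be scripted. Here's a hedged sketch of one way to do it, not a definitive tool: a `review_prep` function (a name I made up) that writes one diff file per top-level directory the branch touches, using a three-dot diff against the merge base the way a PR view does. The demo repo below only exists so the snippet runs standalone.

```shell
# Sketch: one diff file per top-level directory the branch touches,
# so each chunk fits in one chat session. (Breaks on paths with spaces.)
review_prep() {
  base=${1:-main}
  for d in $(git diff --name-only "$base"...HEAD | cut -d/ -f1 | sort -u); do
    git diff "$base"...HEAD -- "$d" > "/tmp/review-$d.diff"
  done
}

# --- throwaway demo repo so this snippet runs as-is ---
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git checkout -qb main
git config user.email "reviewer@example.com" && git config user.name "Reviewer"
mkdir -p api workers
echo "v1" > api/a.py && echo "v1" > workers/b.py
git add -A && git commit -qm "base"
git checkout -qb feature
echo "v2" > api/a.py && echo "v2" > workers/b.py
git commit -qam "change"

review_prep main
ls /tmp/review-api.diff /tmp/review-workers.diff
```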
If you want more prompts like these across the full engineering workflow — planning, debugging, writing tickets, and more — I put together a prompt playbook I've been refining for a while: check it out here.