Gavin Cettolo
Corporate Amnesia: What Happens When Your Team Forgets How Its Own Code Works

Erosion of team trust and developer speed

A few months ago, we had to change a small piece of logic.

It sounded simple.
A minor tweak.
“Shouldn’t take more than a couple of hours.”

Then someone asked:

“Do we know how this actually works?”

Silence.

No one on the team had written that part.
The last person who touched it had left.
There was no documentation.
Tests were… optimistic.

What should have been a small change turned into a half-day investigation.

Not because the problem was hard.

Because the knowledge was gone.


TL;DR

  • Teams don’t just lose code quality over time; they lose understanding.
  • Documentation alone doesn’t solve it; readable and maintainable code does.
  • Preventing knowledge loss is a business problem, not just a technical one.


What Is Corporate Amnesia

Corporate amnesia happens when a team loses the knowledge needed to confidently work on its own system.

It doesn’t happen all at once.

It’s gradual.

A developer leaves.
Another switches teams.
A feature is built quickly and never revisited.

Over time, fewer and fewer people understand how things actually work.

Eventually, you end up with a codebase that:

  • technically works
  • but no one fully understands

And that’s where things start slowing down.


How Knowledge Actually Gets Lost

It’s easy to blame turnover.

But that’s only part of the story.

Knowledge gets lost when:

  • decisions are not documented
  • code is hard to read or follow
  • logic is duplicated instead of centralized
  • context lives in people’s heads, not in the system

And most importantly:

👉 when teams optimize for delivery, but not for clarity

Shipping fast is important.

But if clarity doesn’t follow, knowledge decays.


The Real Cost (It’s Not Just Time)

At first, the cost looks like time.

  • tasks take longer
  • debugging becomes slower
  • onboarding is harder

But the real cost is deeper.

You start seeing:

  • hesitation to change existing code
  • over-engineering “just to be safe”
  • duplicated features instead of modifying existing ones

And eventually:

👉 the team stops trusting the codebase

That’s when velocity drops.

Not because developers are slower.

But because the system is harder to work with.


Why Documentation Alone Doesn’t Save You

The instinctive solution is:

“We need better documentation.”

And yes, documentation helps.

But it has a problem.

It gets outdated.

Fast.

Code evolves.
Docs don’t always keep up.

And after a while, you’re not sure which one to trust.


Code as the Single Source of Truth

Good teams treat code as documentation.

Not by writing comments everywhere.

But by writing code that explains itself.

For example:

function calculateFinalPrice(price, discount, tax) {
  return price - discount + tax
}

vs

function calculateFinalPrice(price, discount, tax) {
  const discountedPrice = applyDiscount(price, discount)
  const finalPrice = applyTax(discountedPrice, tax)

  return finalPrice
}

Both work.

Only one communicates intent.
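For the second version to actually run, the helpers need definitions. Here is a minimal sketch — `applyDiscount` and `applyTax` are assumed, not part of the original, and the arithmetic mirrors the one-liner (`price - discount + tax`, treating both as absolute amounts):

```javascript
// Hypothetical helpers the readable version relies on.
// Assumes `discount` and `tax` are absolute amounts, matching
// the arithmetic of the original one-liner.
function applyDiscount(price, discount) {
  return price - discount
}

function applyTax(price, tax) {
  return price + tax
}

function calculateFinalPrice(price, discount, tax) {
  const discountedPrice = applyDiscount(price, discount)
  const finalPrice = applyTax(discountedPrice, tax)
  return finalPrice
}
```

Each step now has a name, so the next reader learns the pricing rules from the code itself.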


Readable code does a few important things:

  • reduces the need for external documentation
  • makes onboarding faster
  • preserves knowledge inside the system

Because the goal isn’t to explain the code somewhere else.

It’s to make the code explain itself.


A Real Example: “It Works, Don’t Touch It”

Every codebase has one.

A file or module that everyone avoids.

You open it and see something like this:

if (type === "A") {
  // complex logic
} else if (type === "B") {
  // slightly different logic
} else if (type === "C") {
  // copied and modified logic
}

No clear structure.
No clear intent.
No clear ownership.

And someone eventually says:

“It works. Let’s not touch it.”

That’s not stability.

That’s fear.

And fear is one of the clearest signals of lost knowledge.


How to Build a Memory-Resilient Codebase

You can’t prevent people from leaving.

But you can prevent knowledge from leaving with them.

A few practices make a big difference:


Make decisions visible

Not everything belongs in code.

But key decisions should be easy to find.

  • why something was built a certain way
  • trade-offs that were made
  • known limitations

Even lightweight notes help.
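A “lightweight note” can be as small as a short decision record kept next to the code. One possible shape — the file name, numbering, and contents here are purely illustrative, not a prescribed format:

```markdown
# ADR-012: Cache invalidation via event bus

## Context
Product pages were serving stale prices after bulk updates.

## Decision
Invalidate cache entries from a pricing-events consumer instead of relying on TTL alone.

## Trade-offs
+ Prices update within seconds of a change
- Adds a runtime dependency on the event bus

## Known limitations
Price edits made directly in the DB bypass the event stream.
```

Half a page written at decision time beats a wiki page written six months later.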


Optimize for readability

Readable code is not a luxury.

It’s a form of knowledge preservation.

  • clear naming
  • small functions
  • explicit logic

These are not style choices.

They’re long-term investments.


Prefer structure over cleverness

Clever code is hard to remember.

Simple, structured code is easier to rebuild mentally.

And that’s what you want.
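To make that concrete, here is one way the type-based if/else chain from earlier could be restructured. This is a sketch: the handler bodies and the order shape are invented for illustration.

```javascript
// A lookup table of per-type handlers replaces the growing if/else chain.
// Each branch has a name and a single place to live; supporting type "D"
// means adding one entry, not editing shared conditional logic.
// The handler bodies are placeholders for illustration.
const handlersByType = {
  A: (order) => ({ ...order, total: order.total - 10 }), // e.g. fixed discount
  B: (order) => ({ ...order, total: order.total + 5 }),  // e.g. flat surcharge
  C: (order) => order,                                   // e.g. pass-through
}

function processOrder(order) {
  const handler = handlersByType[order.type]
  if (!handler) {
    throw new Error(`Unknown order type: ${order.type}`)
  }
  return handler(order)
}
```

The structure itself now documents which types exist and where each one’s logic lives — exactly the kind of knowledge that otherwise ends up in someone’s head.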


Share ownership

If only one person understands a part of the system, that’s a risk.

Code reviews, pair programming, and rotations help distribute knowledge.


Refactor before it’s too late

The longer you wait, the more knowledge you lose.

Refactoring is not just about code quality.

It’s about keeping understanding alive.


Knowledge Handover Checklist

Most teams don’t lose knowledge suddenly.

They lose it during transitions.

Someone leaves.
Someone changes role.
Someone moves to another project.

And the handover?

Often rushed.
Sometimes skipped.

If you want to avoid that, you don’t need a long process.

You need a simple, repeatable checklist.


Before someone leaves (or switches context)

Make sure these things are covered:


1. Critical areas are identified

  • What parts of the system does this person know best?
  • Which components would be risky without them?

👉 If you don’t know this, that’s already a signal.


2. Key flows are explained, not just code

  • How does the system behave end-to-end?
  • What are the important business rules?

Code shows how.
You also need why.


3. Known pitfalls are documented

  • edge cases
  • fragile areas
  • “this breaks if you change X”

These are things you rarely find in code, but always matter.


4. Decisions and trade-offs are captured

  • why this approach was chosen
  • what alternatives were considered
  • what constraints existed

Without this, future changes become guesswork.


5. Ownership is reassigned clearly

  • who is responsible for this part now?
  • who reviews changes?

Ambiguity here leads to neglect.


6. A real walkthrough happens

Not just docs.
Not just links.

👉 A real session:

  • open the code
  • walk through real scenarios
  • answer questions live

This is where most knowledge transfer actually happens.


Small rule that makes a big difference

Don’t treat handover as a one-time event.

Treat it as a continuous habit:

  • share context during development
  • explain decisions in PRs
  • avoid knowledge silos early

Because the best handover is the one you barely need.


Final Thoughts

Corporate amnesia doesn’t happen because teams are careless.

It happens because knowledge isn’t treated as something that needs to be preserved.

Codebases don’t just store logic.

They store decisions, context, and understanding.

And when that disappears, everything becomes harder.

The goal isn’t to remember everything.

It’s to make sure the system remembers for you.


If this resonated with you:

  • Leave a ❤️ reaction
  • Drop a 🦄 unicorn
  • Share a part of your codebase that nobody wants to touch

And if you enjoy this kind of content, follow me here on DEV for more.

Top comments (23)

Elen Chen

Really enjoyed this piece!
“Corporate amnesia” is one of those problems everyone has felt, but few articulate it this clearly.

What stood out to me is how closely this maps to the broader concept of organizational memory loss. It’s not just a code problem—it’s a systems problem. When knowledge lives primarily in people’s heads (or worse, in Slack threads and half-written docs), it becomes fragile by default. As research on corporate amnesia shows, a significant portion of operational know-how is tacit and walks out the door with team churn, forcing teams to constantly “re-learn” things they already solved.

I also think there’s an interesting tension here:

  • Over-documentation creates noise and entropy
  • Under-documentation creates dependency on individuals

Most teams oscillate between the two without ever designing a real “memory architecture.”

One idea I’ve seen work well is treating knowledge like code:

  • versioned
  • reviewable
  • tied to decisions (not just outcomes)

In other words, not just what the system does, but why it ended up that way. That “why” is usually the first thing to disappear—and the most expensive thing to rediscover.

Curious how you think AI tooling will affect this. A lot of teams are hoping it will solve knowledge gaps, but if anything, it seems like it might amplify them if the underlying context isn’t preserved.

Gavin Cettolo

Thank you @elenchen for the comment!

That’s a thoughtful read of the piece, especially the framing of “memory architecture”. I like that because it shifts the conversation from how much we document to how the system retains meaning over time.

The tension you describe is exactly the trap: teams treat documentation as volume instead of structure. More pages ≠ more memory. Less documentation ≠ more clarity. Without intentional design, both paths decay, just in different ways.

The “knowledge as code” analogy is spot on, but I’d push it one step further: most teams adopt the mechanics (versioning, PRs, etc.) without adopting the intent. Code review works because it encodes context, trade-offs, and constraints at the moment decisions are made. Documentation often happens after the fact, when that context is already diluted. So the real shift isn’t just tooling, it’s timing and proximity to decision-making.

On AI, I share your skepticism. It’s not that it will fail, it’s that it will faithfully reflect whatever memory system you already have.

If your organization has:

  • fragmented, inconsistent, or outdated knowledge
  • missing “why” behind decisions
  • reliance on tribal knowledge

AI doesn’t fix that. It accelerates access to it. Which can actually increase confidence in incomplete or incorrect understanding.

In that sense, AI can amplify corporate amnesia in two ways:

  • False coherence: it synthesizes clean answers from messy inputs, hiding gaps
  • Context erosion: people rely on generated summaries instead of engaging with source decisions

But there’s a more interesting upside if teams are deliberate.

AI becomes powerful when:

  • decision context is captured close to the source (e.g., ADRs, annotated PRs)
  • knowledge is structured and linked (not just stored)
  • there’s a clear boundary between “source of truth” and “generated interpretation”

Then AI shifts from being a crutch to being a memory interface, something that helps navigate and recombine knowledge, rather than invent it.

So I don’t think AI solves corporate amnesia. But it raises the stakes:
teams with weak memory systems will feel the pain faster,
teams with strong ones will compound their advantage.

If anything, AI forces a more uncomfortable question:
are we building systems that remember, or just systems that appear to know?

Elen Chen

This is such a sharp distinction—especially the idea of AI as a “memory interface” rather than a source of truth.

Your point about timing resonates a lot. Most documentation is indeed written after the decision, almost like a post-rationalization. By then, the real constraints, debates, and uncertainties are already gone.

It makes me wonder whether the real failure isn’t documentation at all, but decision capture.

We’re very good at capturing artifacts (code, tickets, docs), but very bad at capturing reasoning in motion.

Maybe the deeper issue is that reasoning is expensive to externalize, so teams default to shortcuts.

Do you think this is solvable culturally, or does it require structural changes in how teams work (e.g., enforced ADRs, decision logs, etc.)?

Gavin Cettolo

I think you're right to separate documentation from decision capture, they’re often conflated, but they solve different problems.

  • Documentation explains what exists.
  • Decision capture explains why it exists.

And when systems degrade, it's almost always the second one that's missing.

On whether it’s cultural or structural, I’d say culture alone isn’t enough. Teams can agree that capturing reasoning is valuable, but under delivery pressure, it’s the first thing to be dropped.

So it has to be embedded into the workflow itself.

The key is reducing the “activation energy” required to capture decisions. If it feels like extra work, it won’t happen consistently.

That’s why I like lightweight approaches:

  • ADRs that are short and pragmatic, not essays
  • PR templates that prompt for “why” and trade-offs
  • Decision logs that are append-only rather than curated

The goal isn’t completeness, it’s continuity.

You don’t need perfect memory, you need enough breadcrumbs to reconstruct intent later.

Without that, teams aren’t really scaling knowledge, they’re just scaling output.

Elen Chen

“Enough breadcrumbs to reconstruct intent” is a great way to frame it.

It also highlights something subtle: memory systems don’t need to be exhaustive, they need to be navigable.

I’ve seen teams try to solve this with massive documentation hubs, but they become graveyards because retrieval is harder than writing.

Which brings us back to AI again—ironically, retrieval is the part AI is very good at.

But that creates an interesting dependency:
AI is only as good as the structure and signals it can retrieve from.

So maybe the real design challenge isn’t just capturing decisions, but making them queryable.

Do you think we should start designing documentation explicitly for machine consumption, not just human readers?

Gavin Cettolo

Yes and I think that shift is already happening, even if implicitly.

Historically, documentation was written for humans reading linearly.
Now it’s increasingly consumed non-linearly by both humans (search-driven) and machines (embedding/retrieval-based).

That changes how we should think about structure.

Instead of long-form narratives, we need:

  • atomic pieces of knowledge
  • explicit linking between decisions, code, and context
  • consistent patterns that AI can recognize

In a way, it’s similar to how we moved from monoliths to modular systems in software.

The risk, though, is over-optimizing for machine readability and losing human clarity.

So the balance becomes:
structured enough for machines to navigate,
meaningful enough for humans to understand.

If you get that right, AI doesn’t just retrieve knowledge, it preserves the relationships between ideas.

And that’s where it starts to feel less like a tool and more like an extension of organizational memory.

Elen Chen

That balance is tricky, but I like the analogy to modular systems—it makes the trade-offs clearer.

One thing I keep thinking about is ownership.

Code has clear ownership boundaries (teams, repos, reviewers).
But knowledge often doesn’t—it’s everyone’s responsibility, which usually means no one’s responsibility.

Do you think effective “memory systems” require explicit ownership, or should they emerge organically from team practices?

Gavin Cettolo

I’d argue that ownership is necessary, but not in the traditional sense.

If you assign ownership the same way you do for code (“this person owns this doc”), you often end up with stale knowledge because ownership becomes a bottleneck.

But if there’s no ownership at all, entropy wins.

So the model that seems to work better is distributed ownership with local accountability.

For example:

  • Teams own the decisions they make (not a central documentation team)
  • Artifacts are updated closest to where changes happen (PRs, issues, ADRs)
  • Systems enforce touchpoints (e.g., “if you change X, revisit Y”)

Ownership becomes event-driven rather than static.

It’s less about “who owns this document” and more about “who is responsible when this context changes.”

That aligns better with how knowledge actually evolves, incrementally, and tied to change.

And interestingly, this is another place where AI could help:
not by owning knowledge, but by detecting drift.

Highlighting inconsistencies,
surfacing outdated assumptions,
pointing out where decisions no longer match reality.

That’s where it starts becoming a feedback loop, not just a retrieval layer.

Elen Chen

Thank you @gavincettolo for this interesting exchange of opinions!

Gavin Cettolo

Thank you too @elenchen

Printo Tom

This hits very close to home.

I’ve seen cases where a “small change” turns into hours of reverse engineering — not because the system is complex, but because the context is missing.

The point about trust really stood out. Once a team stops trusting the codebase, everything slows down — even simple decisions feel risky.

I also like the idea of code being the source of truth. Clean structure and intent go a long way in preserving knowledge without relying on docs that go stale.

Curious — have you seen teams successfully maintain this over time, or does it always decay eventually?

Gavin Cettolo

This is exactly the kind of scenario that made me write the piece.

What you describe, hours of reverse engineering for a “small change”, is usually a symptom of missing decision context, not technical complexity. The system might be simple, but the reasoning behind it has evaporated.

And I completely agree on trust. Once that’s gone, the cost isn’t just time, it’s hesitation. Every change becomes a gamble, so teams either slow down or avoid touching things altogether. That’s when stagnation starts to look like stability.

On your question: I don’t think decay is inevitable, but it is the default.

Teams that manage to resist it tend to do a few things consistently:

  • they capture intent close to the moment decisions are made (not weeks later)
  • they treat code clarity as part of the system’s memory, not just “cleanliness”
  • they create small feedback loops (PRs, reviews, lightweight ADRs) that reinforce context over time

What’s interesting is that none of this requires perfect discipline, just consistency at the edges.
So it’s less about “can this be maintained forever?” and more about “does the system actively fight forgetting, or passively allow it?”

Most systems decay because they’re passive.
The good ones have just enough friction in the right places to make remembering the path of least resistance.

Printo Tom

That’s a really solid way to frame it — “missing decision context, not complexity.”
I think that’s exactly where most teams underestimate the problem.

The idea of systems being passive vs actively resisting forgetting really stands out. In my experience, most systems are passive by default, and by the time teams realize it, the cost is already showing up in slow delivery and hesitation.

Also agree on the “consistency at the edges” point — it’s rarely about big processes, more about small habits done repeatedly.

I’ve seen PR discussions actually become the best form of lightweight documentation when done well.

Curious — have you seen any teams use ADRs effectively without them becoming just another thing that gets ignored over time?

Gavin Cettolo

Great question @printo_tom, and, honestly, that’s where most ADR initiatives either work really well or quietly die.

Short answer: yes, I’ve seen teams use ADRs effectively, but only when they stop treating them as “documentation” and start treating them as part of decision-making itself.

Because the failure mode you’re hinting at is very real: ADRs become a graveyard the moment they’re written after decisions or stored somewhere detached from the workflow.

The teams where they actually work tend to share a few traits:
1. Written at decision time, not after: if you capture them while the trade-offs are still being debated, you get real context. If you write them later, you get a cleaned-up story. And that difference is everything.

2. Lightweight by design: the moment ADRs feel like “writing a document”, adoption drops. The good ones are closer to a structured note than a report (one page: context, decision, consequences).

3. Live with the code: if they’re in Confluence, they get ignored. If they’re in the repo, versioned and visible in PRs, they get used.

4. Triggered by change, not ceremony: the best teams don’t say “we should write ADRs”. They say: “this decision is hard to reverse or affects multiple parts of the system → write one”.

What’s interesting is that ADRs don’t really fail because they’re a bad idea.
They fail because they violate the same principle we talked about earlier: they add friction instead of embedding into the flow.

When they work, they feel like a byproduct of good engineering hygiene.
When they don’t, they feel like compliance.

And that’s probably the litmus test:

  • If writing an ADR feels like stopping work, it will be ignored.
  • If it feels like finishing a decision, it sticks.

Also +1 on PR discussions, honestly, I’ve seen exactly the same thing.

Well-done PR threads are often the highest-fidelity decision records teams have.

ADRs work best when they don’t replace that, but distill the parts you’ll wish you had 6 months later.

Printo Tom

That framing is really helpful — especially the idea that ADRs should feel like finishing a decision, not documenting one after the fact.

I’ve seen exactly what you’re describing — when ADRs are written later, they almost become a “sanitized version” of reality, and you lose the actual trade-offs that mattered.

The point about keeping them close to the code also resonates. Anything outside the developer workflow tends to drift pretty quickly.

I also like how you connected it back to friction — it’s the same pattern everywhere. If something doesn’t fit naturally into how engineers already work, it just won’t stick long-term.

Feels like the real win is not ADRs themselves, but making decision-making visible while it’s happening.

Curious — have you seen any teams strike a good balance between ADRs and PR discussions without duplicating effort?

Gavin Cettolo

Yeah, that’s the key tension, if ADRs and PRs overlap, one becomes noise.

The best balance I’ve seen is:

  • PRs are for exploration
  • ADRs are the decision snapshot

No duplication, just a short summary of the outcome, with a link back to the PR for full context.

Keeping ADRs lightweight and written before merge seems to make the difference. That way they stay useful without becoming overhead.

Feels like the real win is exactly what you said: making decisions visible while they’re happening, not after.

Gavin Cettolo

One thing I’m really curious about after writing this: how many “black boxes” are you all carrying in your current codebase?

Not the obvious messy parts, I mean the ones that technically work, nobody touches anymore, and everyone is a bit afraid to break.

Have you ever had a moment where someone left (or you joined a new team) and suddenly a critical part of the system became… untouchable?

I’d love to hear real stories:

  • What’s the scariest “ghost of developers past” you’ve encountered?
  • Did your team manage to recover that knowledge, or are you still working around it?
  • And most importantly: what actually worked to reduce that risk?

Curious to see how different teams deal with this 👀

Paolo Zero

Really enjoyed this — especially the framing of “corporate amnesia” as a business risk, not just a code quality issue.

The example you shared hits close to home: what should be a small change turning into an investigation is something most teams have experienced. And it perfectly illustrates that the real problem isn’t complexity, it’s lost context. As you point out, once knowledge leaves with people, teams are forced to “re-learn” their own systems — a pattern widely recognized as a major productivity drain in organizations.

One thing that stood out is your emphasis on readable code as memory. That’s a subtle but important shift from the usual “just write more docs” advice. Documentation often decays, but code that clearly expresses intent tends to age much better.

I’d maybe add one complementary angle: beyond readability and ownership, decision visibility (why something exists, not just how it works) is often the missing piece. Without that, even clean code can become misleading over time.

Overall, this is a great reminder that:
we’re not just maintaining systems — we’re maintaining understanding

Gavin Cettolo

Really appreciate this, you captured the core tension perfectly: the problem isn’t complexity itself, it’s the disappearance of context.

I like how you framed “decision visibility.” That’s exactly the layer that tends to evaporate first. Even when code is clean and readable, it can still tell the wrong story if the original intent is missing. At that point, teams aren’t just reading code, they’re interpreting it, and interpretation is where time (and mistakes) creep in.

That’s part of why I lean on readability as a form of memory: it doesn’t just document what the system does, it constrains how much can be misunderstood later. But you’re right, it’s not sufficient on its own.

If I had to extend the idea, I’d say:

  • readable code preserves mechanics
  • decision visibility preserves intent

And long-term understanding depends on both.

Thanks @paolozero for adding that angle, it fits really naturally into the “corporate amnesia” framing.

Luca Ferri

I loved it!

The “2-hour fix” that turns into a half-day investigation hit close to home. That silent moment of “does anyone actually know how this works?” is probably one of the most expensive questions a team can ask—and it happens more often than people admit.

What stood out to me most is the idea that documentation alone isn’t enough. A lot of teams react to knowledge loss by writing more docs, but as you point out, those drift quickly. The deeper issue is that knowledge isn’t embedded in the system itself—so it decays as people leave or context shifts.

I’d add that there’s also a cultural angle here:
teams often reward shipping fast over making things understandable. Over time, that creates a kind of “local optimization trap” where each change makes sense in isolation, but the system as a whole becomes opaque.

One thing I’ve seen help is treating clarity as a deliverable:

  • PRs that explain why, not just what
  • lightweight architecture decision records (ADRs)
  • regular “re-explanation” sessions where someone walks through a part of the system as if it were new

It’s interesting how this connects to the broader idea of organizational memory—when knowledge lives mostly in people’s heads, it literally “walks out the door” with turnover.

Curious: have you seen teams successfully balance speed and clarity without slowing delivery too much? That seems to be the hardest trade-off in practice.

Gavin Cettolo

This is such a good expansion of the idea, especially the “local optimization trap.” That’s exactly how these systems drift into opacity: every change is reasonable in isolation, but the aggregate becomes harder and harder to reason about.

I also really like your framing of clarity as a deliverable. That’s a mindset shift more than a process change, and it’s probably where most teams struggle. Speed is visible and rewarded immediately; clarity is invisible until something breaks.

On your question about balancing speed and clarity, I’ve seen it work, but only when teams stop treating it as a trade-off.

The teams that get closest tend to:

  • bake explanation into the workflow (PRs that justify decisions, like you mentioned)
  • favor small, incremental changes (easier to keep understandable)
  • treat “WTFs per minute” as a real signal during reviews
  • accept that a tiny upfront cost in clarity prevents large downstream costs in rediscovery

So it’s less “slow down to be clear” and more “optimize for not having to rediscover things later.”

The hard part is that the payoff is delayed, while the pressure to ship is immediate, which is why this often ends up being a cultural decision, not just a technical one.

Really thoughtful comment @lucaferri and I’m glad the example resonated (even if for slightly painful reasons).

Cyber Daemon

I feel personally attacked by this article, Gavin. 😅 I came to Dev.to to procrastinate, not to have a mirror aggressively held up to my team's 4,000-line utils/do_not_delete.js file! You call it "Corporate Amnesia"; I call it "Job Security through Obscurity." If I'm the only dev who knows why a load-bearing console.log('here 3') keeps the auth database from catching fire, they literally cannot fire me. It's just math.

That "It works, don't touch it" section triggered my fight-or-flight. We all have that one chunk of legacy code written by a guy named, "Jared," who drank six Red Bulls, refused to write comments, and then vanished to become a goat sim farmer. We don't refactor Jared's code. We pray to it. We offer it sacrifices in the form of disabled ESLint warnings.

I'm officially stealing your handover checklist. Will anyone actually fill it out before they rage-quit? Absolutely not. But I'll feel significantly better pointing at it while the servers burn. Fantastic read, Gavin!

Gavin Cettolo

I respect the honesty, “job security through obscurity” might be the most accurate anti-pattern I didn’t include. 😄😄😄

The scary part is that it works… right up until it doesn’t. The moment you’re unavailable, everything that looked like control turns into fragility.

Also, every team has a “Jared.”

Not a person anymore, but a force of nature embedded in the codebase:

  • slightly magical
  • completely undocumented
  • and somehow critical to business continuity

The “load-bearing console.log” is real. I’ve seen versions of that which people were genuinely afraid to delete, not because of what it does, but because of what might happen if it stops doing it.

And your point about the checklist is fair, most processes fail not because they’re wrong, but because they rely on the worst possible moment (handover, burnout, deadlines) to be followed properly.

That’s why I think the checklist only works if it’s not a one-time thing.
If it becomes something lightweight and continuous (captured in PRs, small notes, decisions as they happen) it stops being a chore and starts being a byproduct.

Otherwise yeah… it just becomes another well-intentioned document we point at while everything burns. 🔥