
The 4 Cognitive Archetypes of Developers Using AI

Julien Avezou on May 04, 2026

Lately, I’ve been reflecting on something: The question for most developers is no longer "Are you using AI?", but rather "How and why are you usin...
Elmar Chavez

I believe I am an AI architect. Literally, my only AI is ChatGPT, and it helps me expand my thinking when concepts aren't clicking. I treat it like a mentor, but I also make sure that what it's saying isn't based on outdated data, so I do cross-referencing on the side.

I also want to point out that what you described is close to metacognition, which is a powerful way of thinking. Sadly, most people nowadays don't feel the need to exercise it, especially now that AI seems like the "magic" answer, until it isn't.

Julien Avezou

Nice insights, Elmar. What you say rings true for an AI architect. Leveraging AI to expand your thinking and fact-checking it against other sources are strong habits.
I agree with your point. The best way to stop seeing AI as "magic" is to dive deeper into how it works behind the scenes. Once you figure out that it relies on stochastic probabilities and is prone to hallucinations, the magic disappears and you become more skeptical about putting full trust in AI-generated output.

CapeStart

This is a useful reminder that AI adoption is not just a tooling problem. It’s a cognitive/workflow problem too.

Julien Avezou

Indeed! Thanks for reading.

Vic Chen

This framework really resonates with me as someone building AI-powered products. The distinction between "supportive" and "risky" modes is spot on — I've caught myself in autopilot mode more than once, especially during late-night debugging sessions. The question "Was this leverage or dependency?" is going on my wall. Love that you made the tracker free — going to try it this week.

Julien Avezou

I am glad this framework resonates with you, @vicchen!
Would love to get your feedback on the tracker.

Vic Chen

Happy to give feedback! From what I can tell building AI-powered financial analytics, the archetype shift often happens mid-project rather than at the start. Teams begin "supportive" but quietly drift into "risky" territory once velocity picks up and pressure mounts. What I'd love to see in a tracker: a way to flag when the ratio of AI-generated code to human-reviewed code crosses a threshold — not as a blocker, but as a nudge to pause and audit. On the 13F data pipeline I built, we use a similar internal check before deploying any parser update. Would the tracker surface those inflection points?
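
For concreteness, here is a minimal sketch of the kind of "nudge, not a blocker" check described above. The function name, inputs, and the 3:1 threshold are illustrative assumptions, not part of Julien's tracker or Vic's internal pipeline check.

```python
from typing import Optional

def review_ratio_nudge(ai_generated_loc: int, human_reviewed_loc: int,
                       threshold: float = 3.0) -> Optional[str]:
    """Warn (never block) when AI-generated code outpaces human review."""
    ratio = ai_generated_loc / max(human_reviewed_loc, 1)
    if ratio > threshold:
        return (f"AI-generated to human-reviewed ratio is {ratio:.1f}:1. "
                "Consider pausing to audit before shipping.")
    return None  # below the threshold: stay quiet; it's a nudge, not a gate
```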

Julien Avezou

That is a great suggestion! The tracker does indeed measure velocity decrease. I used the term 'Dependency Drift': it compares the latest 7 days with the previous 7, watches for an increase in risky sessions and other patterns, and suggests appropriate recommendations.
I agree that the ratio of AI-generated code to human-reviewed code is a good threshold to measure; I will take note of it for the next iteration of the tracker.
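
For readers who want to picture the mechanics, a rough sketch of a week-over-week comparison in the spirit of 'Dependency Drift' follows. The session record and its 'risky' flag are hypothetical, not the tracker's actual schema or implementation.

```python
from datetime import date, timedelta

def dependency_drift(sessions: list[dict], today: date) -> float:
    """Ratio of risky sessions in the last 7 days vs the 7 days before."""
    def risky_count(start: date, end: date) -> int:
        return sum(1 for s in sessions if s["risky"] and start <= s["day"] < end)

    recent = risky_count(today - timedelta(days=7), today)
    previous = risky_count(today - timedelta(days=14), today - timedelta(days=7))
    return recent / max(previous, 1)  # > 1.0 means risky sessions are trending up
```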

Vic Chen

Thanks Julien! The Dependency Drift metric is a smart approach — comparing 7-day windows catches gradual slippage that point-in-time checks miss. The AI code ratio threshold idea could complement it well as a leading indicator (before drift becomes visible in velocity). Happy to give more detailed feedback if you share the tracker link — building in the institutional investing space myself so I think about similar measurement challenges when tracking portfolio drift over time.

Julien Avezou

Thanks Vic! Indeed, good idea! I will include the AI code ratio threshold as a leading indicator in my next version.
I would love more detailed feedback.
Here is the link to the tracker: javz.gumroad.com/l/ai-thinking-bal...

Klaudia Grzondziel

Very interesting article! I think I'm an AI architect – usually asking AI for feedback or ideas, especially when I feel blocked or uncertain. But every time I make sure to review the output with a critical eye and decide whether to follow the idea. I believe it's the single correct way of using this tool.

Julien Avezou

Thanks Klaudia! I agree with your usage of AI here. I would also add that there is real leverage in automating repetitive tasks such as improving documentation or refactoring, with the caveat that you only automate away tasks you already know how to do and have performed successfully in the past without AI assistance.

Klaudia Grzondziel

Be careful with documentation, though 😄 I've seen people misuse AI here by prompting it to "generate docs for this piece of code". You can imagine the output 🙈 Moreover, very often it wasn't even properly reviewed. AI is very good for spellchecks, though, and for tailoring the content to a particular audience.

Julien Avezou

True, thanks for the reminder. Using AI for polishing or adjusting the format and messaging is more advisable than using it to generate the whole document!

Matías Denda

Really interesting article... It's a reality that AI is all around us, and the question is how you use it... I really think that AI right now is empowering more than replacing... You need to use it more as an Architect than as a passenger...

Julien Avezou

Thanks Matías. Yes, it's starting to shape our reality now. This is probably more pronounced in the developer space, but it still impacts everyone's daily lives to a certain extent. Aside from top-down regulation and policy, it is up to us to decide how to use these powerful tools.

Matías Denda

exactly!!! right now, it's like saying loudly you're using AI, it's like admitting you are not good enough, or like you need help to work... I don't think this way, tho... eventually, people will understand AI is a tool (a very powerful one), and people will use it like it or not... (Wikipedia is the perfect example... at the beginning, nobody trusted it because people can change it... eventually, that thinking changed, and now, I'm not going to say that is the source of truth... but a lot of people trust it)...

So... as I said... AI can do excellent things; it is up to you how you use it

Julien Avezou

I like your example of Wikipedia. If there is a common belief that a tool can be trusted, then more effort/attention will be given to reinforce that level of trust, shifting thinking over time.

Marco Sbragi

Honestly, maybe I use AI in a different manner. After so many years developing solutions, I love to think through the entire architecture of a project myself. I try to instruct AI to think like me and follow my philosophy. I use it for planning, but based on my directives. I also use it for repetitive tasks; after so many years, writing code is a repetitive task. Writing good comments and docs is tedious and time-consuming. These tasks are best suited for AI. But I always check and refine every line of code written by an assistant.

Julien Avezou

Nice, Marco. This looks like a pragmatic usage of AI that involves a high degree of self-awareness. You don't let AI do your thinking but rather use it to reinforce it; you leverage your own strengths, involve AI where it adds the most value, and manually review the AI-generated code.

arun rajkumar

The four archetypes ring true — what the piece doesn't say (and probably can't, from outside a team) is that the archetype isn't fixed by the developer, it's pulled by the codebase. Senior engineers at Atoa drift between "skeptic" and "delegator" in the same week depending on whether they're touching a service with strong test coverage or one without — when the safety net is real, the same engineer leans into delegation; when it isn't, they revert to skepticism almost reflexively. The implication for engineering leaders: the way to move someone from skeptic → conductor isn't training, it's investing in the test/lint/review surface they ship into. The archetype is a function of the floor under them.

Julien Avezou

That is a very valuable input. The quality of the codebase itself influences how developers interact with that code, and hence their archetypes. We could still use the archetypes to provide a sense of direction: if they shift over time, it means team leadership is making changes in the right direction. The tracker I implemented focuses on the individual level, but a later version could add a team dimension to capture the extra dynamics you mention here. Thanks for the input, arun!

TAMSIV

Built TAMSIV solo (voice AI task manager, 900+ commits in 6 months) and I'd place myself between Architect and Balancer, but with a twist your framework doesn't capture yet.

The cognitive cost isn't only about the mode I use during a session, it's about whether I can hand off context to the next session without losing what I learned. Solo dev means I'm the architecture decision maker AND the historian. If I let autopilot mode handle a non-trivial slice one week, three months later I have zero idea why those choices were made.

What pulled me back to Architect mode wasn't productivity, it was maintainability. Now any session where I'm tempted to let the model carry the design decisions is a session that needs extra written context capture (ADRs, decisions.md, even short voice memos to my own task app).

Maybe a 5th axis worth tracking: future-self readability of what came out of the session. Same prompt, same model, very different artifacts depending on how much I scaffolded upfront.

Solid framework, the autopilot/passenger distinction is sharp.

Julien Avezou

Thanks for the input. I appreciate your take on context handling over time; it is definitely an important component. Building a maintainable system around your work that can be reused across sessions matters. I could include this in a future version of the tracker; I need to figure out how best to do it.

Max

Archetypes are useful as a mirror, not a cage. The interesting question isn't which one a developer is — it's whether their archetype matches the task in front of them. A "co-pilot" developer doing greenfield architecture struggles. Same person debugging gets a 10x. The archetype is a tendency; the task is the test.

— Max

Julien Avezou

The mirror analogy is a good one.

Khanh Hua

The use of AI reflects and magnifies the strengths and weaknesses of the user. A good developer will use it without losing ownership of the code. A sloppy developer will hide his setbacks and become more defensive. However, my experience studying German tells me one thing: real-time translators cripple the user's ability to pick up the language. I know people who have been in Berlin for years and cannot really speak German, yet they can present ideas and even deliver speeches in German: they prepared the speeches at home using translation tools and read off their notes. Something similar happens to me with specific German phrases that I often look up and cannot remember, because I don't use them frequently.

The cognitive loss or the inability to learn new knowledge or to retain useful information can be observed in other examples:

  • The clock: you don't know what time it is without a digital or mechanical clock, even though the sun and your stomach say it is 7pm :)
  • Cooking rice: most people in my country cook rice with a rice cooker, though it can be cooked on any stove and the cooking time just depends on the type of rice.
  • Google Maps: you don't know how to read street names and intersections to navigate without following that big blue arrow on your smart navigator.
  • Watching movies in a foreign language to study, but with subtitles: you read the subtitles and presume you understood the spoken content.
  • Automatic transmission: switch to manual and you become super clumsy going uphill. Along with other safety assistants, you may become over-reliant on the tools rather than your actual senses. Auto drive (thanks Elon) is the ultimate example here.

Programming is a skill to be learned. Research is a skill to be learned. Software architecture is also a skill to be learned. You are not born with these skills, and we forget and lose skills that are not sharpened and applied. Even your own organs atrophy when not used. Use it or lose it.

Julien Avezou

Very interesting framing, and great examples of cognitive loss in our daily lives.

I agree; having lived in Berlin for 4 years, I can completely relate to your example of translation tools for German! :D

codecraft

I really liked the framing of AI modes instead of treating AI usage as a binary discussion. I think that's the part most people miss. Using AI to challenge assumptions or explore tradeoffs feels very different from using it to bypass the uncomfortable parts of problem solving entirely, even though both technically look like productivity gains.

The “friction avoidance” point also felt very real. I’ve noticed there are moments where AI genuinely accelerates work, and other moments where it quietly becomes a shortcut around thinking through architecture, debugging paths, or system behavior. The dangerous part is that both modes can feel equally productive in the moment.

I also think the long-term effect here becomes more visible in debugging and systems design than in coding speed itself. Someone can generate working code very quickly, but when production behavior diverges from expectations, the depth of understanding becomes apparent fast. That's probably where the difference between an AI Architect and an AI Passenger starts showing up operationally, not just intellectually.

Julien Avezou

I am glad that the AI modes framing resonated with you.

As you highlight, productivity metrics alone can sometimes hide issues such as rising levels of comprehension debt between engineers and their systems.

I agree with you on that last point. I witnessed this recently while debugging in a previous team. It was tricky to debug because the latest code changes had been generated by AI. The slippery slope is then leaning on AI to fix the problem, which can have even larger unintended consequences and makes it very hard to detect the root cause of the issue.

Andrii Krugliak

The smartphone parallel landed for me. I track the same shift as fast loop versus slow loop instead of archetype: slow loop is the kind of thinking where you sit with a problem until the shape of the data answers itself. AI compresses that loop, which is great for grunt work and lethal for problems where the friction was the work. Last month I noticed I'd stopped writing scratch notes — the model fills the blank too quickly, and my own half-formed sentences never get a chance to sharpen. Had to physically close the editor for an hour at a time to get the slow loop back. Awareness alone doesn't fix it, deliberate friction does.

Julien Avezou

I like the way you framed this. Awareness + deliberate friction is the effective combination here.
I can relate to the scratch notes; I also noticed that I write fewer of them during periods when I lean too heavily on my AI assistant.

Andrii Krugliak

'Periods when I leaned too heavily' is the right calibration — it's not a fixed style, it's a tide. The ratio of half-formed sentences to finished output drops, and I notice it three weeks late. The cheapest tell I've found is grepping for typos in my own commit messages: low count means I drafted in the model first and pasted clean text, high count means I was thinking on the keyboard. Crude, but it correlates with whether I can still reason about the codebase a week later.
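
A rough take on that typo-count proxy is sketched below. Assumptions: git is on PATH and a Unix wordlist exists at /usr/share/dict/words; it is a crude out-of-dictionary count, so identifiers and jargon will inflate it.

```python
import re
import subprocess

def commit_typo_count(since: str = "2.weeks") -> int:
    """Count words in recent commit messages missing from the system dictionary."""
    with open("/usr/share/dict/words") as f:
        dictionary = {w.strip().lower() for w in f}
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=%B"],
        capture_output=True, text=True, check=True,
    ).stdout
    words = re.findall(r"[A-Za-z]{3,}", log)
    return sum(1 for w in words if w.lower() not in dictionary)
```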

Julien Avezou

Grepping for typos! That's a smart proxy measure for reasoning quality. Nice one.

Shlomo Friman

The "avoiding friction" admission is the most honest line here. Friction and thinking are often the same thing and reaching for AI to skip one frequently skips the other. The archetype that's hardest to self-diagnose isn't the Passenger, it's the Autopilot Builder — the output still looks good, velocity stays high, and the comprehension erosion is invisible until you need to debug something unfamiliar at 2am without assistance.

Julien Avezou

Very true. A lot of people would agree with this after experiencing a complex issue at 2am caused by unfamiliar changes.

Mykola Kondratiuk

honestly these aren't fixed types - I've been the cautious adopter on legacy refactors and full-power user on greenfield in the same sprint. the archetype shifts with the risk profile of the task.

Julien Avezou

True, these types just provide a sense of awareness over time. You can definitely shift between them based on the task. The tracker takes this into account, as the cognitive cost is more substantial for tasks with a higher risk profile.

Mykola Kondratiuk

exactly — risk profile overrides the default every time. greenfield gets full velocity, production state gets the cautious lens regardless of which type you default to.

Theo Valmis

The archetypes framing is useful because it pushes past the lazy 'AI good vs AI bad' debate. The split that ends up mattering most in practice is which devs treat AI output as a draft to challenge vs a finished artifact to accept. The former keep their codebase's invariants intact, the latter quietly erode them one merged PR at a time.

Julien Avezou

Exactly, thanks for validating the approach.
'A draft to challenge vs a finished artifact to accept' - I really like that framing.

PEACEBINFLOW

The line that landed for me was "sometimes I wasn't using AI because I needed leverage, I was using it because I wanted to avoid friction." That's a harder admission than it first appears, because friction and thinking are often the same thing wearing different names.

I've noticed a pattern in my own usage that doesn't map neatly onto the archetypes: I'll use AI heavily during the messy middle of a task — refactoring, generating test cases, drafting documentation — and then find myself pulling back and working in a much more manual, deliberate way once the shape of the solution becomes clear. It's almost like the AI's value isn't in replacing the thinking but in handling the cognitive busywork that surrounds it, so that when I do sit down to think hard, the surface is clear.

The risk I'm more worried about isn't the obvious one of delegating too much. It's the subtler erosion of what I'd call "debugging stamina" — the willingness to sit with something broken for forty minutes without reaching for a tool to shortcut the process. That tolerance for unresolved tension used to be where I learned the most. I'm not sure how to measure whether it's declining, which makes it an uncomfortable thing to track.

Julien Avezou

I am glad this post resonated with you.

It's interesting that you mention this AI usage pattern. I also find myself doing this towards the middle of a big task, using AI for the heavy lifting such as refactoring, QA, auditing my architectural decisions, documentation, etc., sandwiched between manual steering of the task at the start and finish.
I would argue that this is actually what an AI Architect archetype would do: you are aware of when to increase and decrease your use of AI, and you leverage its capabilities when it matters and is less risky.

Interesting mention of "debugging stamina". I agree that these moments build patience and resilience and push a deeper understanding of the codebase that wouldn't otherwise be possible. In my tracker I include non-AI blocker hours as a proxy metric for this. It could be a way to measure how this ability erodes (or recovers) over time.
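
As a tiny, hypothetical illustration of that 'non-AI blocker hours' proxy: the session record shape below is invented for the example and is not the tracker's schema.

```python
from collections import defaultdict

def weekly_non_ai_blocker_hours(sessions: list[dict]) -> dict[str, float]:
    """Hours spent sitting with a blocker, unassisted, grouped by ISO week."""
    totals: dict[str, float] = defaultdict(float)
    for s in sessions:
        if not s["used_ai"]:
            week = s["day"].strftime("%G-W%V")  # e.g. '2026-W19'
            totals[week] += s["blocked_hours"]
    return dict(totals)  # a falling trend may hint that debugging stamina is fading
```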

Rushank Savant

Interesting take. I just wrote about the 'Machine Identity' crisis in RAG agents—I think we're underestimating the security debt we're building right now.

Julien Avezou

Thanks @rushanksavant. I had a look at your post. I agree that setting security boundaries is critical in agentic workflows. We are seeing too many headlines about security breaches and issues recently.

Nals

I can relate to the smartphone analogy. It's such a powerful tool to have at hand but I also need to stay aware of how I am using it to avoid too much dependence.

Julien Avezou

Exactly @npp555, you can adapt your behavior, and it all starts with awareness.