I disappeared from dev.to for 6 months. I came back with a new job, 5 products, and a monolith.
Last September, I published a post about mind mapping and quietly walked away from dev.to.
No announcement. No farewell post. I just stopped showing up. Six months later, I am back - with a new role, a hard-won architectural lesson, and a lot to say about where AI engineering is actually heading.
Let me start with the thing that will probably annoy some of you.
While everyone moved to microservices, I moved to a monolith
Over the last couple of years, I have quietly built five separate products. Different repos, different stacks, different deployment pipelines. Each one made sense in isolation. Each one had its own auth system, its own UI component library, its own API client layer.
You can probably see where this is going.
I was maintaining five versions of the same login flow. Five slightly different button components. Five copies of the same utility functions, each with its own subtle bugs. Every time I fixed something in one product, I had to remember to port it to the others - and I usually didn't.
The real cost of duplication is not the extra code. It is the extra decisions. Every divergence is a future debugging session you have already scheduled without knowing it.
The conventional wisdom right now is to decompose everything - break your monolith, split your services, distribute your teams. And for the right scale, that is genuinely correct advice. But I am one person with five interconnected products, not Netflix with 500 engineers.
So I did the opposite. I consolidated.
I extracted a shared packages layer - a single source of truth for auth, UI components, API clients, and utilities - published internally, versioned with semver (semantic versioning: major.minor.patch, so each product knows exactly how risky an upgrade is), and consumed by all five products via a private registry. Each product still has its own repo, its own deployment, its own release cycle. But the shared logic lives in one place and flows down.
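In concrete terms, each product's manifest pins the shared packages with a semver range. Here is a sketch of what a consumer might look like - the package names and registry are made up for illustration:

```json
{
  "name": "product-one",
  "dependencies": {
    "@acme/shared-auth": "^1.4.0",
    "@acme/shared-ui": "^2.1.0"
  },
  "publishConfig": {
    "registry": "https://npm.internal.acme.dev"
  }
}
```

The caret ranges mean patch and minor releases of the shared packages flow in on the product's next install, while a major bump - a breaking change - requires a deliberate, reviewed upgrade.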
Here is what the architecture looks like in practice:
- Hotfix in one product? One repo, one commit, one PR - fix the folder you need and everything else stays untouched
- Fix in the shared package? Bump the version, each product upgrades on its own schedule
- CI publishes automatically on merge to main
- A bot opens upgrade PRs when a new version drops
It is not a monolith in the traditional sense. It is closer to what a well-run open source organisation does - shared packages, independent consumers. The key insight was that shared logic and independent deployments are not mutually exclusive. You don't have to choose between sharing code and moving fast.
This architecture decision saved me probably 30% of my maintenance overhead and made me feel like a sane person again.
The new role: AI software engineer
The other big thing that happened during my hiatus: I transitioned into a dedicated AI Software Engineer role.
I want to be careful about what that phrase actually means, because it gets used loosely. It does not mean I prompt ChatGPT and call it a day. It means I am building systems where language models are runtime components - where the output of an LLM is an input to another function, where agents take actions, where the line between "the app" and "the model" is genuinely blurry.
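To make "LLM as a runtime component" concrete, here is a minimal sketch. Everything in it is illustrative - the model call is a deterministic stub standing in for a real API client - but the shape is the point: the model's raw output gets parsed and validated at a boundary, then consumed by ordinary deterministic code.

```typescript
type Sentiment = "positive" | "negative" | "neutral";

// Stand-in for a real model call (e.g. an API client). Deterministic
// here so the sketch runs without network access.
async function callModel(prompt: string): Promise<string> {
  return prompt.includes("love") ? "positive" : "neutral";
}

// Validate the model's free-text output into a typed value the rest of
// the app can trust. This boundary is where "the app" meets "the model".
function parseSentiment(raw: string): Sentiment {
  const cleaned = raw.trim().toLowerCase();
  if (cleaned === "positive" || cleaned === "negative" || cleaned === "neutral") {
    return cleaned;
  }
  return "neutral"; // fall back rather than crash on model drift
}

// Deterministic business logic consuming the model's output.
async function routeTicket(message: string): Promise<string> {
  const sentiment = parseSentiment(await callModel(`Classify sentiment: ${message}`));
  return sentiment === "negative" ? "escalate" : "standard-queue";
}
```

The interesting engineering is not in the model call - it is in `parseSentiment`, the seam where a probabilistic output becomes a typed value downstream code can rely on.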
A few things I have learned from doing this for real, in production:
Reliability engineering for AI systems is a different discipline. When a deterministic function breaks, you get a stack trace. When an LLM component drifts, you get subtly wrong outputs that look plausible. You need evals - automated test suites that score model outputs - in the same way you need unit tests for regular code. Most teams skip this until something goes wrong in production. Don't skip it.
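A minimal eval harness can be surprisingly small. This is a sketch, not any particular framework - the case and model are stubs I made up - but it shows the core idea: graders return a score rather than pass/fail, so drift shows up as a falling average instead of a flaky test.

```typescript
type EvalCase = {
  input: string;
  // Graders score in [0, 1] instead of asserting exact equality,
  // because LLM outputs vary run to run.
  grade: (output: string) => number;
};

function runEvals(model: (input: string) => string, cases: EvalCase[]) {
  const scores = cases.map(c => c.grade(model(c.input)));
  const average = scores.reduce((a, b) => a + b, 0) / scores.length;
  return { scores, average };
}

// Example: a summarizer must mention the key entity and stay short.
const cases: EvalCase[] = [
  {
    input: "Summarize: The invoice from Acme Corp is overdue by 30 days.",
    grade: out =>
      (out.toLowerCase().includes("acme") ? 0.5 : 0) +
      (out.split(/\s+/).length <= 20 ? 0.5 : 0),
  },
];

// Stub standing in for a real LLM call.
const stubModel = (_input: string) => "Acme Corp invoice is 30 days overdue.";

const report = runEvals(stubModel, cases);
// Gate deployment on the average staying above a threshold.
if (report.average < 0.8) throw new Error(`Eval regression: ${report.average}`);
```

Run this in CI next to your unit tests and a drifting prompt or model upgrade fails the build instead of failing a customer.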
Context window management is the new memory management. Every token you send to a model costs money and latency. Designing prompts and retrieval pipelines to be efficient - giving the model exactly what it needs to do its job, no more - is a real engineering skill that took me longer to appreciate than it should have.
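The retrieval side of that discipline can be sketched as a simple budgeting problem: keep the most relevant chunks that fit, drop the rest. The token estimate below is a rough heuristic I am using for illustration - a real pipeline would use the model's actual tokenizer.

```typescript
type Chunk = { text: string; relevance: number };

// Rough heuristic: ~4 characters per token for English text.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

// Keep the most relevant chunks that fit the budget - give the model
// exactly what it needs to do its job, no more.
function packContext(chunks: Chunk[], tokenBudget: number): string[] {
  const picked: string[] = [];
  let used = 0;
  for (const chunk of [...chunks].sort((a, b) => b.relevance - a.relevance)) {
    const cost = estimateTokens(chunk.text);
    if (used + cost > tokenBudget) continue; // skip what doesn't fit
    picked.push(chunk.text);
    used += cost;
  }
  return picked;
}
```

Every chunk this function drops is money and latency you did not spend - and, just as importantly, noise the model never has to reason around.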
The best AI products are not the ones with the most powerful models. They are the ones with the tightest feedback loops between what the user wants and what the model is asked to do. Prompt engineering is mostly UX design in disguise.
What I observed from the sidelines
Being mostly offline from the tech discourse for six months gave me an interesting vantage point. A few things stood out when I resurfaced:
Agentic coding moved from demo to daily workflow faster than almost anyone predicted. Claude Code, Cursor Agents, and similar tools are now genuinely part of serious engineering pipelines - not as autocomplete, but as collaborators that run code, read errors, and iterate. I am using these tools every day across my side projects and the productivity shift is real.
The React ecosystem quietly found its footing. The RSC debates have largely settled into working patterns. Next.js 15 is stable and predictable in production. Tailwind v4 landed and the community moved on. The chaos I remember from early 2025 has become a relatively calm set of established patterns - which is actually great news for anyone building products.
The "AI replaces developers" narrative peaked and deflated. What replaced it is more accurate: AI changes what your time as a developer is worth. Time spent on boilerplate drops. Time spent on architecture, product thinking, and system design becomes more valuable. The engineers I know who are thriving right now are not the ones who avoided AI tools - they are the ones who learned to direct them.
Why I am back
Honestly? I missed thinking out loud in public.
Writing here forced a kind of clarity that private notes never quite replicate. When you know someone might read it, you have to actually finish the thought. You can't trail off into vagueness. That discipline made me a better engineer, and I let it slip.
Going forward, I would rather publish one post every two weeks that says something I genuinely believe than four posts a week that summarise things you could read anywhere.
The shared packages architecture I described above is worth a full deep-dive post with code. So is building reliable evals for production AI systems. So are a dozen other things I have been sitting on.
If you have been on a similar hiatus - or if you are in the middle of one right now - I would love to hear about it in the comments. And if you stuck around from before: thank you. It genuinely means something.
Let's build.

Top comments (7)
nobody successful split their monolith to follow a trend - they did it because of team size or deployment independence. if those conditions don't apply, a monolith isn't contrarian. it's just correct.
Yeah, that is fair, I agree with you.
What I was trying to get at is what happens after microservices became popular. A lot of smaller teams (me included at one point) start using that approach before they actually need it.
In my case, the bigger issue was not team size or deployments, it was repeating the same code across multiple products. Having a shared, modular setup solved that better than splitting things up further.
So yeah, I don’t think it is really monolith vs microservices, it is more about picking what fits your situation instead of following what works for bigger teams.
honestly that's the part that gets me the most - the cargo-culting. I've seen teams spin up 8 services because netflix does it, then spend 6 months debugging distributed tracing before shipping a single feature. the pattern made sense at netflix scale. it rarely translates directly.
I've been working with monoliths for two years for small and medium-sized clients, and everything is working great. They don't have enough resources to maintain a microservices architecture, so we've moved forward with modular monoliths. Life is good.🍺
Totally agree, Freddy! Modular monoliths are often underrated for small and medium teams. Microservices may sound appealing, but the operational overhead can be significant. Glad to hear it is working well for your clients.
Thank you for the post, very insightful. I am burning to upgrade my hardware so I can test using AI the way you described.
Thanks, Said, really appreciate it! You do not need powerful hardware to get started. Tools like Claude Code and Cursor run in the cloud, so your local machine is not that important. Give it a try and let me know how it goes!