Building fast with AI is easy, but building something you can actually run, inspect, and trust is harder. This project is my attempt to teach builde...
Wow, that's impressive Julien! Releasing a whole coding course generator as open-source takes a lot of guts. I love that you're focusing on core engineering intuitions, that's exactly the kind of foundational knowledge we need to build on. Observability is so important when working with AI, it's great to see you emphasizing that.
Thanks for the support Aryan! The hardest part of this project was creating building blocks that felt intuitive for someone with no prior coding experience.
not sure local-first solves the trust problem - it mostly moves it. you still have to trust that generated course content is correct. the harder part: verifying the output actually teaches the right thing, not whether you can run it locally.
Indeed, this was a real challenge, which is why I created a full concept dependency graph.
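For readers unfamiliar with the idea, here is a minimal sketch of what a concept dependency graph can look like. The concept names and structure are purely illustrative (not taken from the actual project): each concept lists its prerequisites, and a topological sort yields a teaching order that never introduces a concept before its dependencies.

```python
from graphlib import TopologicalSorter

# Illustrative graph: each concept maps to the set of concepts it depends on.
concepts = {
    "variables": set(),
    "functions": {"variables"},
    "loops": {"variables"},
    "recursion": {"functions"},
    "closures": {"functions", "loops"},
}

# static_order() emits every node after all of its predecessors,
# giving a valid teaching order (and raising CycleError on circular deps).
order = list(TopologicalSorter(concepts).static_order())

for concept, prereqs in concepts.items():
    assert all(order.index(p) < order.index(concept) for p in prereqs)
print(order)
```

A graph like this also makes gaps inspectable: any concept referenced as a prerequisite but missing from the graph is a hole in the curriculum you can detect mechanically.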
I would argue local-first does help with the trust problem: it gives users control over their LLM usage and the ability to tweak the actual course if they want to.
fair - editability does shift the equation, and a dependency graph makes gaps inspectable which most systems skip. still skeptical most learners will actually tweak content vs just accept what's generated, but that's a constraint on users not a flaw in your design
I completely understand your skepticism. Let me clarify something else: when making this product local-first, I was putting myself in the shoes not only of learners, who might not yet have the confidence to tweak content themselves while they are still learning, but also of educators who might want to fork the repo and adapt the course content for their own learners.
that reframes it for me - less about whether learners edit, more about building infrastructure for the whole confidence spectrum. clicks now.
I love how you went into this thinking, "AI will accelerate my implementation," and immediately pivoted to, "I am now the beleaguered manager of a terrifyingly fast junior dev who will absolutely delete the production database if I don't give them explicit architectural boundaries." The realization that AI doesn't remove the need for debugging discipline, but just generates the bugs at 100x speed, is the most relatable 2026 dev moment ever. Props for keeping the AI optional and local-first, though. Nothing scares me more than a beginner tutorial that requires a $20/month subscription just to console.log("Hello World") in a black box! 🤣
I am glad some of my architectural decisions resonated with you !
Completely agree, I love seeing products with optional and local-first AI features. It gives me much more trust in the product.
Wow! Great job, Julien :)
Thanks Ben :)
no problem :). I can't wait for your next post.
Thanks, I have more coming :D
nice :)
The most useful signal in a learning product is not completion rate but what students attempt right after finishing a module. If they extend it, the concept landed. If they recreate it verbatim, transfer did not happen. That gap is invisible without instrumented checkpoints — which is exactly why your PR-based validation model is the right call.
Agree, thanks for validating this design choice.
Amazing..!
Thanks Ateeb!
That is really awesome congrats!
Thanks Saad!