Just want to clarify: this is not my Substack. I’m just sharing it because I found it insightful.
The author describes himself as a “fractional CTO” (no clue what that means, don’t ask me) and advisor. His clients asked him how they could leverage AI, so he decided to experience it for himself. From the author (emphasis mine):
I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.
I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

I did see someone write a post about Chat Oriented Programming that, to me, appeared successful, but not without cost and extra care. Original Link, Discussion Thread
Successful in that it wrote code faster and its output stuck to conventions better than the author’s would have. But they had to watch it like a hawk, with the discipline of a senior developer giving full attention to a junior: stop and swear at it every time it ignored the rules given at the beginning of each session, terminate the session when it starts its autocompactification routine (which wastes your money and makes Claude forget everything), and try to dump whatever it has completed each time. One of the costs seems to be the developer’s sanity, so I really question whether this is a sustainable way of working, for both the model side and the developer side. To actually succeed you need to know what you’re doing; otherwise it’s easy to fall into a trap like the CTO did, trusting the AI’s assertions that everything is hunky-dory.
That perfectly describes what my day-to-day has become at work (not by choice).
The only way to get anywhere close to production-ready code is to do exactly what you just described, and the process is incredibly tedious and frustrating. It also isn’t really any faster than just writing the code myself (unless I’m satisfied with committing slop), and in the end I still don’t understand the code I’ve ‘written’ as well as if I’d done it without AI. When you write code yourself there’s a natural self-reinforcement mechanism, the same way that taking notes in class improves your understanding and retention more than just passively listening does. You don’t get that when vibe coding (no matter how knowledgeable you are or how diligently you babysit it), and the overall health of the app suffers a lot.
The AI tools are also worse than useless when it comes to debugging, so good fucking luck getting it to fix the bugs it inevitably introduces…
For debugging there’s the Google Antigravity method: there can’t be bugs if it wipes the whole drive containing your project (taps head)