11 Apr 2025
On larger systems, with larger teams, I spend more time reading code than writing it.
And reading code is harder than writing it, in the same way that learning to hear a new language is harder than learning to speak it.
(That's why writing simple, obvious code is a good idea. Otherwise, if you were only just clever enough to write it, you're not going to be clever enough to read it, later.)
At least on small personal projects, I'm having some success with vibe-coding. "Success" here means "I prompt the feature, Cursor writes it, and it works". The code usually isn't even that bad¹.
But at that point I have two options. I can take the time to read the code, symbol by symbol, line by line, and fully understand it. That has a cost in time and cognitive energy that is often higher than the cost of just writing it myself would have been.
Or I can do a quick readthrough and YOLO it into prod. The price of this is that since I've never properly understood the new code, my mental model of the system - what we call "context" at work - hasn't been updated.
Before I can hand-write any code in future, I have to load (or update) that context. Context-loading time is why we try to avoid task-switching at work.
The impact of taking the second option is that there's a strong disincentive to hand-write anything from that point onwards. The more you vibe-code, the stronger the incentive to keep vibe-coding, so that you don't have to pay the startup cost of context-loading. And because AI is (currently) bad at architecture, you eventually get to the point where you couldn't switch back to hand-writing even if you wanted to.
There's also a second-order cost I'm a little worried about. Coding skills rust quickly if you're not using them.² Pilots perform manual landings even though autoland is pretty good, just so that they still can if they suddenly have to. If you're rusty enough, you won't spot when the AI does something "rare, but incredibly stupid", which does seem to be its failure mode.
Here's my (evolving) strategy for handling this:
Decide upfront whether I care about this codebase. If it's long-lived, complex, critical, or I'm being paid to write it, I probably do. If it's a quick personal hack to get something done (and not for the joy of learning or discovery), I probably don't.
If I don't care about the codebase, YOLO. I have no context and will never load any more than I absolutely have to. Prompt the feature at a very high level. Maybe prompt some tests. Loop failures or errors back into the AI without even really reading them. Don't read the code, just run it. Get hands-on only if the AI gets stuck in a loop it can't break out of by itself. The AI is not only the chef, it's very nearly the restaurant manager.
If I do care about the codebase, think deeply about the feature and implementation, using the full context I already have loaded and am committed to maintaining. Prompt the AI in stages, often by writing function headers and telling it to implement them. Read and understand every line, or hand-write if it's going to be faster (which it might be in some places; the AI types faster and doesn't have to look up syntax, but I know exactly what I want and won't go off-track). I'm cooking this. The AI can chop carrots and wash dishes.
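To make the function-header approach concrete, here's a sketch of what I mean (the function, its name, and the scenario are all invented for illustration; they're not from any real project). I write the signature and docstring myself, which pins down exactly what I want, and only the body is delegated:

```python
# The part I hand-write: signature plus docstring. This is the "prompt".
# The body below is the kind of thing the AI fills in; reviewing it is
# then a check against a spec I wrote, not reverse-engineering intent.

def dedupe_events(events: list[dict], key: str = "id") -> list[dict]:
    """Return events with duplicates (by `key`) removed, keeping the
    first occurrence of each key and preserving the original order."""
    seen = set()
    result = []
    for event in events:
        k = event[key]
        if k not in seen:
            seen.add(k)
            result.append(event)
    return result
```

Because the header and docstring fully specify the behaviour, reading the generated body updates my mental model cheaply: I already know what it's supposed to do before I read a line of it.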
The space is evolving quickly, and I'm seeing positions taken all the way from "10x or 100x increase in productivity coming soon, or here already" through to "this is useless and I don't use it". For my particular combination of skills/domain/language, I think the truth is in the middle - I'd guesstimate that Cursor + Claude 3.7 is making me noticeably better at what I do, but it's not revolutionising my life.
Even that, though, is economically Quite Something. They're charging me $20/mo. At $200/mo I wouldn't blink, and at $1000/mo I might still be able to make the business case.
AI is surprisingly good at writing code. It's just that writing code was never the hard bit.