by agentultra 3 days ago

Valid concern and one I share. If you’re going to vibe-code an operating system, you will still need experience with and understanding of operating-system fundamentals to have a chance at producing anything useful.

The one-shot vibe-coded C compiler is a good example. Sure, it created a compiler that could pass the basic tests, but it was nowhere near a plausible or useful compiler you’d run in a production system.

Someone who knows compilers reviewed it and was able to prompt Claude or Gemini to fix the issues. But still… you’re not going to be able to do that unless you know what to look for.

On an enterprise development team doing boring line-of-business software? You might have a chance at rolling the dice and trusting the agents, tests, and processes to catch things for you. But I’d still be worried about people who don’t know what questions to ask, or who lack the deep expertise to know what “good” looks like.

daotoad 2 days ago

The gap between quality work and baseline LLM output is precisely the understanding.

If it can be validated by automation, the bot will do it. But no automation suite is complete or perfect.

What concerns me is that building software with LLMs creates a distance that inhibits forming the sort of understanding I need to "just know" a code base intuitively. So when product asks for a feature, I'm less able to be sufficiently pedantic about the six different non-obvious things it impacts. And when I need to choose abstractions and try to form an effective ontology, my intuition is weaker. I believe I can still grind out an effective solution, but I start farther from the finish line.

Does the LLM's ability to "answer questions" about the codebase make up for my lack of intuition? Does my apparent ability to run faster make up for the fact that I am starting farther from the end of the race?

I don't know yet.