The gap between quality work and baseline LLM output is precisely that understanding.
If it can be validated by automation, the bot will do it. But no automation suite is complete or perfect.
What concerns me is that building software with LLMs introduces a distance that inhibits forming the kind of understanding I need to "just know" a codebase intuitively. So when product asks for a feature, I'm less able to be sufficiently pedantic about the six non-obvious things it impacts. And when I need to choose abstractions and form an effective ontology, my intuition is weaker. I believe I can still grind out an effective solution, but I start farther from the finish line.
Does the LLM's ability to "answer questions" about the codebase make up for my lack of intuition? Does my apparent ability to run faster make up for starting farther from the end of the race?
I don't know yet.