Vibe coding is interacting with a codebase via prompts. Because the implementation is hidden from the “vibe coder”, engineering concerns inevitably get ignored. Many of those concerns are hard to express in a prompt, and many are hard to verify by inspecting only the final artifact. Historically, engineering practice has tried to shift those concerns left – to the earlier stages of development, where they’re cheap to address. With vibe coding, they’re shifted far to the right, where addressing them is expensive.
Whether an AI system can perform the complete engineering cycle – building and evolving software the way a human does – remains an open question. There are no signs of it being able to do so at this point, and if it happens one day, it won’t have anything to do with vibe coding – at least as it’s defined today.
I’m already sick of this phrase. The tech hallucinates too much and understands security about as well as a whale understands astrophysics.
Everything an LLM outputs is a hallucination, and sometimes that hallucination lines up with reality.
That feeling when an LLM matches your worldview.
Also, this. (Not LLM, but still beautifully disturbing.)