The repeated use of the phrase “it works” is unhelpful. What the author means is “it appears to work.”
There is a vast difference between actually working and looking superficially like it works.
This is a massive testing problem.
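A toy sketch of the gap (my example, not the author's): code that passes a casual "it works" check but fails on the first untested edge case.

    # Toy illustration: "appears to work" vs. "actually works".
    # The happy-path demo passes; the edge case does not.
    def average(xs):
        return sum(xs) / len(xs)

    assert average([2, 4, 6]) == 4   # the casual check: "eh, it works"
    # average([])  # ZeroDivisionError -- the kind of bug a quick demo never hits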
This makes me think of Eternal September, which I'd say the author would argue we've now reached with respect to coding.
I think this author makes the mistake, as many people do when they project AI trends forward, of ignoring the feedback mechanisms.
> Third, and I think scariest: it means that programming (and the craft of software) will cease to evolve. We'll stop asking "is there a better way to do this" and transition to "eh, it works." Instead of software getting better over time, at best, it will stagnate indefinitely.
“Eh, it works” isn’t good enough in a competitive situation. Customers will notice if software has weird bugs, is slow, or is clunky to use. Some bugs can even result in legal liability.
When this happens, other firms will be happy to build a better mousetrap to earn the business, and that’s an incentive against stagnation.
Of course, the FAANG-type companies aren’t very competitive. But their scale necessitates serious engineering effort: a bad fuck-up can be really bad, depending on what it is.
> Chauffeur Knowledge
Going into this piece, I expected an analogy where the user is like an out-of-touch wealthy person who builds a shallow model of the world from what they hear from their LLM chauffeur or golf caddy.
That is something I fear will spread, as people place too much trust in the assistant in their pocket, turning to it at the expense of other sources of information.
> That's when it hit me: this is going to change everything; but not in the utopian "everything is magical" sense, but in the "oh, God, what have we done" sense.
I think of it like asbestos, or leaded gasoline: incredibly useful in the right situation, but used so broadly that we regret it later. (Or at least, the people who didn’t make their fortunes selling it will.)