> The machine is real. The silicon is real. The DRAM, the L1, the false sharing, the branch predictor flipping a coin—it’s all real. And if you care, you can work with it.
This is one of the most beautiful pieces of writing I’ve come across in a while.
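For anyone who hasn't fought it firsthand, here's a minimal sketch of the false sharing the quote mentions (my own illustration, not from the article; the names and iteration counts are invented): two threads that never touch each other's data, yet stall on each other anyway because their counters share a cache line.

```cpp
#include <atomic>
#include <thread>

// Two counters that almost certainly land on the same 64-byte cache line.
// Each increment invalidates the other core's copy of that line, so the
// threads fight over it even though they share no data logically.
struct SharingCounters {
    std::atomic<long> a{0};
    std::atomic<long> b{0};
};

// The same counters forced onto separate cache lines (64 bytes is a common
// line size). Identical logic, but the coherence traffic disappears.
struct PaddedCounters {
    alignas(64) std::atomic<long> a{0};
    alignas(64) std::atomic<long> b{0};
};

template <typename Counters>
void hammer(Counters& c) {
    std::thread t1([&] {
        for (long i = 0; i < 100'000'000; ++i)
            c.a.fetch_add(1, std::memory_order_relaxed);
    });
    std::thread t2([&] {
        for (long i = 0; i < 100'000'000; ++i)
            c.b.fetch_add(1, std::memory_order_relaxed);
    });
    t1.join();
    t2.join();
}

int main() {
    SharingCounters shared;  // time this one...
    PaddedCounters padded;   // ...against this one; the gap is often several x
    hammer(shared);
    hammer(padded);
}
```

Time the two versions with `std::chrono` or `perf stat` and the difference shows up immediately - that's the kind of mechanical sympathy the article is talking about.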
The thing that's most often missed in these discussions is that "writing code" is just the end artefact. It doesn't account for the endless tradeoffs made in producing said artefact - the journey to get there.
Just try implementing a feature with a junior in a mildly complex codebase and you'll catch all the unconscious tradeoffs you're making as an experienced developer. AI has some concept of what these tradeoffs are, but mostly by observation.
AI _does_ help with writing code. Keyword there being "help".
But thinking is the human's job. LLMs can't/don't "think". Thinking about how to get the AI to produce the output you want is also your job. You'll think less and less as models get better.
This is the crux of the piece to me:
"We'll enshrine this current bloated, sluggish, over-abstracted hellscape as the pinnacle of software—and the idea of squeezing every last drop of performance out of a system, or building something lean and wild and precise, will sound like folklore."
This somewhat lines up with my concerns about libraries and patterns before 2023 getting frozen in stone once we pass over the event horizon where most new code to train on is generated by LLMs. We aren't innovating; we are going to forever reinforce the screwed-up dependency stack and terrible kludges of the last 30 years of development. JavaScript is going to live forever.
This resonates with me, for sure; both the benefits and the drawbacks of Copilot. But while I think the kids and hackers were artisans, engineers were always just engineers. The amazing technical challenges they solved to create some of the foundational technologies we have today existed because someone had to solve them. Looking only at those and saying "that's how things used to be" is survivorship bias.
I feel this in my bones. Every day I'm getting challenged by leadership that we're not using AI enough, told that I should halve my estimates because "we'll use AI", and being told that there's a new AI tool that I have to adopt because someone is tracking KPIs related to adoption and if our team doesn't adopt enough AI tools we're going to be fired to give more headcount to those that do.
It's like the world has lost its goddamn mind.
AI is always being touted as the tool to replace the other guy's job. But in reality it only appears to do a good job because you don't understand the other guy's job.
Management has an AI shaped hammer and they're hitting everything to see if it's a nail.
I think the difference between A.I.'s fake intelligence and us humans can be summed up by a single quote from Oscar Wilde.
"I have spent most of the day putting in a comma and the rest of the day taking it out."
No A.I. would ever think more than a millisecond about a comma; it's pure data retrieval for it: "In what percentage of texts is there a comma after this word, and in how many isn't there? OK, done."
My point of comparison of choice is overseas contractors, not pair programming.
Copilot or Cursor or whatnot is basically a better experience because you do not have to get on Zoom calls (after Slack has failed) to ask why some chunk of your system that cares about root nodes has mysteriously gained a function called isChild (not hasChildren) that returns a boolean based on whether the node has children and not whether it has a parent - see the sketch below. Or to figure out why a bunch of API parameters that used to accept arrays now don't. Or why an ask to not show a constant literal in a menu resulted in algorithmic derivation of ordinals rather than using i18n.
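To make that concrete, here's roughly what that bug looks like (a hedged reconstruction - the type and names are invented for illustration, not the actual code):

```cpp
#include <vector>

struct Node {
    Node* parent = nullptr;
    std::vector<Node*> children;

    // What the name promises: "is this node a child?", i.e. does it have a parent.
    // What the body actually answers: "does this node have children?"
    bool isChild() const { return !children.empty(); }  // this is hasChildren()

    // What callers working with root nodes actually needed:
    // bool isChild() const { return parent != nullptr; }
};
```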
With AI you probably don't have those kinds of things happen, but if you do, you can instantly tell it, sorry, that's wrong, this is why, and have it changed in a minute. Whereas with contractors, you waste a lot of time on things like communication and understanding gaps and language barriers that are mostly gone with AI.
The second you can interact really easily w/ AI from Jira Tickets, most engineers are going to turn into ticket writers and overseers for 80% of their work. (And yes, you'll still need engineers, because Product can't actually write decent engineering tickets, though telling the AI to write engineering tickets will probably get close, and because somebody with a clue needs to be in the loop, though many organizations will try to forget this and have things they don't understand go terribly wrong.)
The author is clearly a C++ programmer. I've been noticing that these AI tools are worse at C++ than other languages, especially scripting languages. Whenever I try to learn from people that are using these tools successfully, they always seem to be using a scripting language and working on some CRUD app.
I couldn't help but read parts of this in Bertram Gilfoyle's voice.
Someone tell me I'm not alone.
I used to work with someone like this. At first, he really wanted to do things properly. Over time, he gave up. Not because he was lazy, but because he felt like effort didn’t really matter.
Copilot’s fine for boilerplate. But lean on it too much, and you stop thinking. Stop thinking long enough, and you stop growing. That’s the real cost.
I think all arguments pro and against AI assistants for coding should include a preface that describes the programming language, the domain of the app, the model being used, and the chosen interface for interacting with the assistant.
Otherwise everyone's just talking past each other.
I think the key is always having the ability to telescope: coding agents enable you to stay high level, but you always need the ability to go down and fix/understand the code when needed.
As a preface, I think lots of people will not like this take.
A lot of people are going to have to come to a realization that has already been mentioned before, but that many find hard to grasp.
Your boss, stakeholders, and especially non-technical people literally give 0 fucks about "quality code" as long as it does what they want it to do. They do not care about tests; as far as they're concerned, if it works, it works. Many have no clue about, nor do they care about, whether something just refetches the world in certain scenarios. And AI - whether we like it or not, whether it repeats the same shit and isn't DRY, doesn't follow patterns, reinvents the wheel, etc. - is already fairly good at that.
This is exactly why all your stakeholders and executives are pushing you to use it. They've been fed that it just gets shit done and pumps out code like nothing else.
I really think a lot of the reason some people say it doesn't give them as much productivity as they would like is due largely to a desire to write "clean" code based on years and years of our own training, and due to having to be able to pass code review done by your peers. If these obstacles were entirely removed and we went full bandaid off I do think AI even in its current state is fairly capable of replacing plenty of roles. But it does require a competent person to steer to not end up in a complete mess.
If you throw away the guardrails a little bit and stop obsessing about how nice the code looks, it absolutely will move things along faster than you could before.
The article is right: AI does require giving up control and letting things be done differently - how much depends on how much you really use it.
It's the same when you get a junior dev to work on things: it's just not how you would do it yourself, and frequently wrong or naive. Sometimes it's brilliant and better than what you would have done yourself.
That doesn't mean you shouldn't have junior devs, but having one does mean you still have to do corrective work and refinement on top of theirs.
Most of us aren't changing the world with our code, we're contributing an incredibly small niche part of how it works. People (normal people, lol) only care what your system does for them, not how it works or how great the code is.
Beautiful and witty prose to say "vibe coding sucks". He's not at all wrong about the state of AI coding in May of 2025. The 3 hours I just burned trying to get it to correct output bugs in a marimo notebook (which I started learning this week) is demonstrable evidence.
But it completely ignores the fact that AI generated code is getting better on a ~weekly basis. The author acknowledges that it is useful in some contexts for some uses, but doesn't acknowledge that the utility is constantly growing. We certainly could plateau sometime soon leaving us in the reckless intern zone, but I wouldn't bet on it.
> The real horror isn’t that AI will take our jobs—it’s that it will let people in who never wanted the job to begin with.
I fully agree. This already happened with the explosion of DevOps bullshit, where people with no understanding of Linux got jobs by memorizing abstractions. “Stop gatekeeping,” they say. “Stop blowing up prod, and read docs” I fire back.
What a great essay - this made me laugh out loud as I finished my week. Thanks for this masterpiece; I just shared it in some engineering channels at my company, even the AI ones!
People think LLMs are magic, but they're just engineering and we should treat them as such: they have flaws, they can be improved, and in the end they're often just a tool for a job. Not the tool for all jobs.
> AI has no concept of memory locality. No intuition for cache misses.
Not true at all, but you have to ask it.
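For what it's worth, here's the standard example I'd use to probe that (my own sketch, not from the article): two loops doing identical arithmetic where only the traversal order, and therefore the cache behavior, differs. Current models can explain exactly why the first one is faster for large n - but only if you ask.

```cpp
#include <vector>

// Row-major walk over a row-major buffer: consecutive addresses, so each
// cache line fetched from DRAM is fully used before moving on.
long sum_row_major(const std::vector<long>& m, int n) {
    long s = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            s += m[i * n + j];
    return s;
}

// Column-major walk over the same buffer: a stride of n longs per access,
// so for large n nearly every access misses cache.
long sum_col_major(const std::vector<long>& m, int n) {
    long s = 0;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            s += m[i * n + j];
    return s;
}
```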
These tools have no understanding of clean architecture; they are like geeksforgeeks or w3schools - decades-old shitty "tutorials" written by amateurs for amateurs, condensed into a chatbot. If you work on cleanly architected code they can still be useful; they can see the patterns and usually perform better in my experience. But not in a million years will you get to that clean architecture by starting them off from scratch. Keeping it turned off until you have a solid foundation would be my advice.
Despite the fact that this post about AI has some valid points, the truth of the matter is that a good experienced developer _can_ extract the power of a coding agent to get 6 months' worth of work done in 3 weeks, and even do it in a language he's never coded in before. AI just isn't AGI yet, so comparing it to a human developer doesn't make any sense to me. On the other hand, I'd also say AI coding agents are "superhuman" in their vast knowledge and code-writing abilities. That does sound like a contradiction, but the world is nuanced enough for it not to be one - just two things that are true in a nuanced way.
> The real horror isn’t that AI will take our jobs—it’s that it will let people in who never wanted the job to begin with.
Gross. Also: you could have said this about the spreadsheet.
My favorite thing about this article is that it was released the exact same day as Claude 4 Opus.
If it's satire, it's brilliant: because most of the comments I see (here and elsewhere) are clearly written by people who tried agentic coding before Opus 4, and haven't given it a fair shake over the ensuing five days.
IMO the most important engineering skill in 2025 isn't low-level programming, or the craft of debugging, or even having a firm grasp of system architecture. Believe it or not, I truly believe the vibe-first juniors will learn that stuff too, over the course of their careers, just as we did: through necessity. (As an aside: if you don't think they'll ever encounter such a necessity, then it's inherently not one - no more than the countless other once-honored, fastidious hallmarks of craft that have since been rendered obsolete. And if you don't think they'll learn even upon encountering a true necessity, then you underestimate them.)
No, the most important engineering skill in 2025 is non-attachment: constantly update your priors, and hold your opinions very loosely. Because those opinions could be fully wrong before the essay even gets shared.
I want everyone to join the programming club who wants to.
Ouch! Right in the feels
I came here to say that the story was a rather delightful read, regardless of whether one agrees with the points being made.
I wonder if we'll look back on this period in a couple of years and feel a nostalgic fondness as we think of the fateful moment when people working in software were forced to pull the wool from their eyes and look at the fact that businesses really, really, really dislike losing huge amounts of money paying people to make the software their businesses completely depend on.
I mean, I'm guessing that's true. It'd make a lot of sense if they vehemently disliked that. It's hard to make sense of it all otherwise, really.
Sounds like it was written by ChatGPT, to be honest.
Clean code is only good for non-profit organizations. You're paid to solve problems as fast as possible, not to code.
Adapt or die. Keep up on industry trends and learn how to (responsibly) use the tools to be a better programmer, or be unnaturally selected out.
Is the writing here intentionally bad? Maybe it's not my cup of tea, but it's been a while since I've read something so cringey; it was difficult to finish.
I loved this article. But for some reason my gut is telling me it will age like milk. Can you imagine how effective these coding agents will be in, say, 2036? The concept of coding things by hand for the sake of higher quality will seem so outdated.
Not sure if he was talking about photography, desktop publishing, spreadsheets, or some other labor-saving invention.
But what I heard over the din of whining was "It was hard for me, it should be hard for you". And... that's not how this or anything works. You get labor-saving stuff, you choose if you want to continue to solve hard problems, or if you want the same problems (which suddenly turned easy).
Yes, it's not perfect. Yes, you need to know how you use it, and misusing it causes horrible disfiguring incidents. Guess what, the same was true about C++. And C before it. And that new-fangled assembly stuff, instead of using blinkenlights like a real programmer. And computers instead of slide rules.
Up the complexity ladder we keep going.
I agree with this article. But I do think it's important to understand that these tools do have value - especially when learning. I also think a lot of the issues raised will improve when we can increase context length. Googling problems and getting obsolete answers from 2007 also slows down progress, but we're not saying Google is worthless for serving those results.
These tools will get better, and they will eventually allow the best to extend their ability instead of both slowing them down and potentially encouraging bad practices. But it will take time, and increased context length. The world is full of people who don't care about best practice, and if that's all the task requires of them - keep on keeping on.
> if you want to sculpt the kind of software that gets embedded in pacemakers and missile guidance systems and M1 tanks—you better throw that bot out the airlock and learn.
But the bulk of us aren't doing that... We're making CRUD apps for endless incoming streams of near identical user needs, just with slightly different integrations, schemas, and lipstick.
Let's be honest. For most software there is nothing new under the sun. It's been seen before thousands of times, and so why not recall and use those old nuggets? For me coding agents are just code-reuse on steroids.
PS: Ironically, the article feels AI-generated.