This mirrors insights from Andrew Ng's recent AI startup talk [1].
I recall he mentions in this video that the new advice they are giving to founders is to throw away prototypes when they pivot instead of building onto a core foundation. This is because of the effects described in the article.
He also gives some provisional numbers (see the section "Rapid Prototyping and Engineering" and the slides around 10:30) where he suggests prototype development sees a 10x boost, compared to a 30-50% improvement for existing production codebases.
This feels vaguely analogous to the switch from "pets" to "livestock" when the industry switched from VMs to containers. Except, the new view is that your codebase is more like livestock and less like a pet. If true (and no doubt this will be a contentious topic to programmers who are excellent "pet" owners) then there may be some advantage in this new coding agent world to getting in on the ground floor and adopting practices that make LLMs productive.
There are some things that you still can't do with LLMs. For example, if you tried to learn chess by having the LLM play against you, you'd quickly find that it isn't able to track a series of moves for very long (usually 5-10 turns; the longest I've seen it last was 18) before it starts making illegal choices. It also generally accepts invalid moves from your side, so you'll never be corrected if you're wrong about how to use a certain piece.
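(The legality part, at least, is easy to check mechanically outside the model. A rough sketch of what that harness could look like using the python-chess library, with the actual model call left as a hypothetical stub:)

```python
# Rough sketch: verify each move the model proposes before accepting it.
# get_llm_move() is a hypothetical stand-in for whatever chat API you use;
# python-chess ("pip install chess") does the rules-tracking the model can't.
import chess

def get_llm_move(board: chess.Board) -> str:
    """Hypothetical: ask the model for the next move in UCI form, e.g. 'g1f3'."""
    raise NotImplementedError

board = chess.Board()
for turn in range(1, 41):
    uci = get_llm_move(board)
    try:
        move = chess.Move.from_uci(uci)
    except ValueError:
        print(f"Turn {turn}: couldn't even parse {uci!r}")
        break
    if move not in board.legal_moves:
        print(f"Turn {turn}: illegal move {uci!r} in position {board.fen()}")
        break
    board.push(move)
```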
Because it can't actually model these complex problems, it really requires awareness from the user regarding what questions should and shouldn't be asked. An LLM can probably tell you how a knight moves, or how to respond to the London System. It probably can't play a full game of chess with you, and will virtually never be able to advise you on the best move given the state of the board. It probably can give you information about big companies that are well-covered in its training data. It probably can't give you good information about most sub-$1b public companies. But, if you ask, it will give a confident answer.
They're a minefield for most people and use cases, because people aren't aware of how wrong they can be, and the errors take effort and knowledge to notice. It's like walking on a glacier and hoping your next step doesn't plunge through the snow and into a deep, hidden crevasse.
Since agents are good only at greenfield projects, the logical conclusion is that existing codebases have to be prepared such that new features are (opinionated) greenfield projects - let all the wiring dangle out of the wall so the intern just has to plug in the appliance. All the rest has to be done by humans, or the intern will rip open the wall to hang a picture.
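To make the wiring metaphor concrete, I imagine something like a pre-defined extension point: the human-maintained core owns the interface and the registration, and a new feature only implements the narrow contract. All the names below are made up, just a sketch of the shape:

```python
# Sketch: the "wiring in the wall" is an interface plus a registration hook
# owned by the existing codebase; a new feature is a self-contained module
# that plugs into it. All names here are hypothetical, for illustration only.
from typing import Protocol

class ReportSource(Protocol):
    name: str
    def fetch_rows(self) -> list[dict]: ...

_REGISTRY: dict[str, ReportSource] = {}

def register(source: ReportSource) -> None:
    """The only seam a new feature is supposed to touch."""
    _REGISTRY[source.name] = source

# A "greenfield" feature: the agent (or intern) writes only this part.
class CsvExport:
    name = "csv_export"
    def fetch_rows(self) -> list[dict]:
        return [{"id": 1, "status": "ok"}]

register(CsvExport())
```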
AI is an interpolator, not an extrapolator.
The learning-with-AI curve should cross back under the learning-without-AI curve toward the higher end, even without "cheating".
The very highest levels of mastery can only come from slow, careful, self-directed learning that someone in a hurry to speedrun the process isn't focusing on.
I agree with most of TFA but not this:
> This means cheaters will plateau at whatever level the AI can provide
From my experience, the skill of using AI effectively is treating the AI with a "growth mindset" rather than a "fixed" one. What I do is roleplay as the AI's manager: I give it a task, and as long as I know enough to tell whether its output is "good enough", I can lend it some of my metacognition via prompting to get it to continue working through obstacles until I'm happy with the result.
There are diminishing returns, of course, but I've found I can get significantly better output than what it gave me initially without having to learn the "how" of the skill myself (i.e. I'm still "cheating"), focusing my learning only on the boundary of what is hard about the task. By doing this, I feel that over time I become a better manager in that domain, without having to spend the effort to become a practitioner myself.
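Mechanically, the loop I'm describing is something like the sketch below; ask_model and good_enough are placeholders for whatever model API and acceptance check you actually use:

```python
# Sketch of the "manager" loop: keep re-prompting with a critique until the
# output clears your own bar or the budget runs out. ask_model and
# good_enough are hypothetical callables you supply (model API + judgment).
from typing import Callable

def manage(task: str,
           ask_model: Callable[[str], str],
           good_enough: Callable[[str], bool],
           max_rounds: int = 5) -> str:
    output = ask_model(task)
    for _ in range(max_rounds):
        if good_enough(output):
            break
        # Lend the model some metacognition: point at the weakness,
        # don't hand it the solution.
        output = ask_model(
            f"Task: {task}\n"
            f"Your previous attempt:\n{output}\n"
            "This isn't good enough yet. Identify the weakest part and improve it."
        )
    return output
```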
This is exactly how I've been seeing it. If you're deeply knowledgeable in a particular domain, let's say compiler optimization, I'm unsure whether LLMs will increase your capabilities (your ceiling). However, if you're working in a new domain, LLMs are pretty good at helping you get oriented, thus raising the floor.
The greatest use of LLMs is the ability to get accurate answers to queries in a normalized format without having to wade through UI distraction like ads and social media.
It's the opposite of finding an answer on reddit, insta, tvtropes.
I can't wait for the first distraction-free OS that is a thinking and imagination helper, not a consumption device where I have to block URLs on my router so my kids don't get sucked into a Skinner box.
I love being able to get answers to documentation and work questions without having to wade through some arbitrary UI bs a designer has implemented in ad hoc fashion.
This tracks for other areas of AI I am more familiar with.
Below average people can use AI to get average results.
I think a good way to see it is "AI is good for prototyping. AI is not good for engineering"
To clarify, I mean that AI tools can help you get things done really fast, but they lack both breadth and depth. You can move fast with them to generate proofs of concept (even around subproblems of large problems), but without breadth they lack the big-picture context, and without depth they lack the insights that any greybeard (master) has. On the other hand, the "engineering" side is so much more than "things work". It is about everything working in the right way: handling edge cases, being cognizant of context, designing failure modes, and all these other things. You could be the best programmer in the world, and that wouldn't mean you're even a good engineer (in the real world these are coupled, as skills learned simultaneously; you could be a perfect leetcoder and not be helpful on an actual team, but these skills correlate).
The thing is, there will never be a magic button that a manager can press to engineer a product. For a greybeard, most of the time isn't spent on implementation, but on design. To get to mastery you need experience, and that experience requires understanding nuanced, non-obvious things. There may be a magic button that allows an engineer to generate all the code for a codebase, but that doesn't replace engineers. (I think this is also a problem in how we've been designing AI code generators. It's as if they're designed for management to magically generate features, the same thing they wish they could do with their engineers. I think the better tool would focus on generating code from an engineer's description.)
I think Dijkstra's comments apply today just as much as they did then [0].
[0] On the foolishness of "natural language programming" https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
In things that I am comparatively good at (e.g., coding), I can see that it helps 'raise the ceiling' as a result of allowing me to complete more of the low level tasks more effectively. But it is true as well that it hasn't raised my personal bar in capability, as far as I can measure.
When it comes to things I am not good at, it has given me the illusion of getting 'up to speed' faster. Perhaps that's a personal ceiling raise?
I think a lot of these upskilling utilities will come down to delivery format. If you use a chat that gives you answers, don't expect to get better at that topic. If you use a tool that forces you to come up with answers yourself and get personalized validation, you might find yourself leveling up.
AI raises everything - the ceiling is just being more productive. Productivity comes from the adequacy and potency of tools. We have a hell of a strong tool in our hands; therefore, the more adequate the usage, the higher the leverage.
Most non-trivial expertise topics are not one-dimensional. You might be at the "ceiling" in some particular sub-niche, while still on the floor on other aspects of the topic.
So even if you follow the article's premise (I do not), it can still potentially 'raise' you wherever you were.
The key seems to be whether you have enough expertise to evaluate or test the outputs. Some others have referred to this as having a good sense of the "known/unknown" matrix for the domain.
The AI will be most helpful for you in the known-unknown / unknown-known quadrants, not so much in the known-known / unknown-unknown ones. The latter, unfortunately, is where you see the most derailed use of the tech.
AI is going to cause a regression to the most anodyne output across many industries. As humans who had to develop analytical skills, writing skills, etc., we struggle to imagine the undeveloped brains of those who come of age in the zero-intellectual-gravity world of AI. OpenAI's study mode is at best a fig leaf.
edit: this comment was posted tongue-in-cheek after my comment reflecting my actual opinion was downvoted with no rebuttals:
I'm not sure this is my experience so far. What I'm noticing is that my awesome developers have embraced AI as an accelerator, particularly for research, system design, and small targeted coding activities with guardrails. My below-average developers are having difficulty integrating AI into their workflow at all. If this trend continues, the chasm between great and mediocre devs will widen dramatically.
In one sense it's a floor-lowerer, since it lowers the floor on how clueless you can be and still produce something loosely describable as software.
At least the last coding-with-AI chart is still too optimistic, I think. It doesn't reflect how AI coding tools are making developers less productive (instead of more) in non-trivial projects.
Really liked this article.
I wonder: the graphs treat learning with and without AI as two different paths. But obviously people can switch between learning methods or abandon one of them.
Then again, I wonder how many people go from learning about a topic using LLMs to then leaving them behind to continue the old school way. I think the early spoils of LLM usage could poison your motivation to engage with the topic on your own later on.
I think all of this is true, but the shape of the chart changes as AI gets better.
Think of how a similar chart for chess/go/starcraft-playing proficiency has changed over the years.
There will come a time when the hardest work is being done by AI. Will that be three years from now or thirty? We don't know yet, but it will come.
So speaking of "mastery":
I wanted to know how to clone a single folder in a Git repository. Having done this before, I knew that there was some incantation I needed to make to the git CLI to do it, but I couldn't remember what it was.
I'm very anti-AI for a number of reasons, but I've been trying to use it here and there to give it the benefit of the doubt and avoid becoming a _complete_ dinosaur. (I was very anti-vim ages ago when I learned emacs; I spent two weeks with vim and never looked back. I apply this philosophy to almost everything as a result.)
I asked Qwen3-235B (reasoning) via Kagi Assistant how I could do this. It gave me a long block of text back that told me to do the thing I didn't want it to do: mkdir a directory, clone into it, move the directory I wanted into the root of the directory, delete everything else.
When I asked it if it was possible to do this without creating the directory, it, incorrectly, told me that it was not. It used RAG-retrieved content in its chain of thought, for what that's worth.
It took me only 30 seconds or so to find the answer I wanted on StackOverflow. It was the second most popular answer in the thread. (git clone --filter=tree:0 --depth=1, then git sparse-checkout set --no-cone $FOLDER, found here: https://stackoverflow.com/a/52269934)
I nudged the Assistant a smidge more by asking it if there was a subcommand I could use instead. It then suggested "sparse-checkout init", which, according to the man page for that subcommand, is deprecated in favor of "set". (I went to the man page to understand what the "cone" mode was and stumbled on that tidbit.)
THIS is the thing that disappoints me so much about LLMs being heralded as the next generation of search. Search engines give you many, many sources to guide you to the correct answer if you're willing to do the work. LLM services tell you what the "answer" is, even if it's wrong. You get potential misinformation back while also turning your brain off and learning less; a classic lose-lose.
The second derivative of floor raising is ceiling raising.
AI will be both a floor raiser and a ceiling raiser. There is a practical limit to how many domains one person or team can be expert in, while AI does (or will) have very strong expertise and competency across a large number of domains. It will thus offer significant level-ups in areas where cross-domain synthesis is crucial, or where the limits of human working memory and pattern recognition make cross-domain synthesis unlikely to occur.
AI also enables much more efficient early stage idea validation, the point at which ideas/projects are the least anchored in established theory/technique. Thus AI will be a great aid in idea generation and early stage refinement, which is where most novel approaches stall or sit on a shelf as a hobby project because the progenitor doesn't have enough spare time to work through it.
Wouldn't it be both by this definition? It raises the bar for people who maybe have a lower IQ ("mastery"), but people who can use AI can then do more than ever before, raising the ceiling as well.
The blog still assumes that AI does not affect Mastery. I think it does.
Take all the AI junk like the agents in service centers that you need to outplay in order to get in touch with a human: we as consumers are accepting this new status quo. We will accept products that sometimes do crazy stuff because of hallucinations. Why? Ultimate capitalism, consumerism, sheepism, some other ism.
So AI (whether it is correlation or causation, I don't know) also corresponds with a lower level of Mastery.
Oh man, I love this take. It's how I've been selling what I do when I speak with a specific segment of my audience: "My goal isn't to make the best realtors better, it's to make the worst realtors acceptable".
And my client is often the brokerage; they just want their agents to produce commissions so they can take a cut. They know their top producers probably won't get much from what I offer, but we all see that their worst performers could easily double their business.
AI is not a floor raiser
It is a false confidence generator
Mixing this with a metaphor from earlier: giving a child a credit card is also a floor raiser.
People should be worried because right now AI is on an exponential growth trajectory and no-one knows when it will level off into an s-curve. AI is starting to get close to good enough. If it becomes twice as good in seven months then what?
Only the first two mastery-time graphs make sense.
Only for the people already affluent enough to afford the ever-more expensive subscriptions. Those most in need of a floor-raising don’t have the disposable income to take a bet on AI.
It’s definitely about wage stagnation.
AI isn't a pit. AI is a ladder.
AI is chairs.
I'd argue that AI reduces the distance between the floor and the ceiling, only both the floor and ceiling move -- the floor moves up, the ceiling downwards. Just using AI makes the floor move up, while over-reliance on it (a very personal metric) pushes the ceiling downwards.
Unlike the telephone (telephones excited a certain class of people into believing that world-wide enlightenment was on their doorstep), LLMs don't just reduce reliance on visual tells and mannerisms, they reduce reliance on thinking itself. And that's a very dangerous slope to go down on. What will happen to the next generation when their parents supply substandard socially-computed results of their mental work (aka language)? Culture will decay and societal norms will veer towards anti-civilizational trends. And that's exactly what we're witnessing these days. The things that were commonplace are now rare and sometimes mythic.
Everyone has the same number of hours and days and years. Some people master some difficult, arcane field while others while it away in front of the television. LLMs make it easier for the television-watchers to experience "entertainment nirvana" while enticing the smart, hard workers to give up their toil and indulge in "just a little" rest, which, due to the insidious nature of AI-based entertainment, meshes more readily with their more receptive minds.
AI is a wall raiser.
AI is a floor destroyer not a ceiling destroyer. Hang on for dear life!! :P
I guess the AI glazing has infiltrated everywhere
At the very least
I was thinking about this sentiment on my long car drive today.
It feels like when you need to paint walls in your house. If you've never done it before, you'll probably reach for tape to make sure you don't ruin the ceiling and floors. The tape is a tool for amateur wall painters to get decent results somewhat efficiently compared to if they didn't use it. If you're an actually good wall painter, tape only slows you down. You'll go faster without the "help".
You'll find many people lack the willpower and confidence to even get on the floor though. If it weren't for that they'd already know a programming language and be selling something.
AI is a shovel capable of breaking through the bottom of the barrel.
OP doesn't understand that almost everything is neither at the floor nor at the ceiling.
I mean, it makes sense that if the AI is trained on human-created things, it can never actually do better than that. It can't bust through the ceiling of what it was trained on. And at the same time, AI gives that power to people who just aren't very smart or good at something.
The blog post has a bunch of charts, which gives it a veneer of objectivity and rigor, but in reality it's all vibes and conjecture. Meanwhile, recent empirical studies actually point in the opposite direction, showing that AI use increases inequality rather than decreasing it.
https://www.economist.com/content-assets/images/20250215_FNC...
https://www.economist.com/finance-and-economics/2025/02/13/h...