I keep hearing this, and then I try using Windsurf with the latest Anthropic models to perform simple refactorings. There are often goofy mistakes, like hallucinations that mistakenly remove imports. Will check back next year…
Related: Behind the Curtain: A white-collar bloodbath https://news.ycombinator.com/item?id=44115407
He is playing the CEO 101 game, trying to convince investors to pump more money into his AI company.
What would you expect from the CEO of an AI-model company who figured their models were good at coding? These were the people who wanted alignment :)
In other words, it's getting really good at breaking your software!
I feel like there's an adjacent law to https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline... which states that any headline containing "could" is the same as "probably won't"
> Click here and you could win $100
If AI lab leaders and researchers really believe this, and continue working on the products that they believe will make it happen, does that make them psychopaths?
I use Claude daily, and I am not saying this as a hot take or a burn. I have been genuinely thinking about how this works in one's brain.
Now that we all have stoves at home, all restaurants will be going out of business any day now.
>"We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,"
Thank you for being so honest Mr CEO. What a great guy.
/s
It continues two trends dating back well before ChatGPT.
1. Companies' shrinking investment in "developing people to senior." It's been in decline for decades.
2. Self-education becoming key to "finishing" your education. College can't reasonably provide a complete education with the complexity and pace of software. New frameworks, CI, git, just all sorts of things aren't in curricula. University starts with Von Neumann, bubble sort, & big-O and has to proceed forward from there. Luckily today's kids have infinitely-patient LLMs! And insane amounts of content from youtubers! And infinite distribution! It's easier than ever to put your work out there and have it be seen. Kids can apply to jobs adding links to their portfolios and their open-source and show their chops that way, meaning companies need to lean less on interviews.
> "AI isn't stealing job categories outright — it's absorbing the lowest-skill tasks," Doshay said. "That shifts the burden to universities, boot camps, and candidates to level up faster."
Taking on interns and junior devs used to be part of the deal for tech companies that wanted the best talent. Now they can just look at kids' public portfolios and pluck the best ones.
It's a brave new world built on public personas where everyone is their own CEO and it's not for everyone. That's where the race comes in.
Once companies realize only so many AIs can be overseen by one person, they'll hire anyone and everyone who can babysit AIs to produce what the company needs - the more AI you can babysit the more valuable you are. Companies will become desperate for talent to put the compute to work. Jevons Paradox at full tilt.
Young guns WILL succeed in this environment. They'll learn on their own time and dime. It's never been easier, thanks to LLMs with infinite patience and youtubers providing deep explanations.
But it's not entry-level software engineering. It's seat-of-your-pants learning and moving fast, running and gunning to get a thing built. Quality guardrails like PRs, code review, and tests are more important than ever, and installing and instilling them is where you as a senior dev can shine.