> Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel;
It was probably around 7 years ago when I first got interested in machine learning. Back then I followed a crude YouTube tutorial which consisted of downloading a Reddit comment dump and training an ML model on it to predict the next character for a given input. It was magical.
I always see LLMs as an evolution of that. Instead of the next character, it's now the next token. Instead of GBs of Reddit comments, it's now TBs of "everything". Instead of millions of parameters, it's now billions of parameters.
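For anyone who never followed that kind of tutorial, the core idea fits in a few lines. Here's a minimal sketch of the character-level version, using a simple counting (bigram) model instead of a neural net; the filename is hypothetical:

```python
import collections
import random

def train(text):
    # For each character, count which characters follow it in the corpus.
    counts = collections.defaultdict(collections.Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(counts, seed, length=200):
    # Repeatedly sample the next character given only the previous one.
    out = seed
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out += random.choices(chars, weights=weights)[0]
    return out

corpus = open("reddit_comments.txt").read()  # hypothetical comment dump
model = train(corpus)
print(generate(model, seed="The "))
```

Swap the count table for a transformer, characters for tokens, and GBs for TBs, and the loop is exactly the one LLMs still run: predict the next unit, append it, repeat.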
Over the years, the magic was never lost on me. Still, I've never been able to see LLMs as more than a "token prediction machine". Maybe throwing more compute and data at them will eventually make them so good that they're worthy of being called "AGI" anyway? I don't know.
Well anyway, thanks for the nostalgia trip on my birthday! I don't entirely share the same optimism - but I guess optimism is a necessary trait for a CEO, isn't it?
> although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly
If the "mistake" is that of concentrating too much power in too few hands, there's no recovery. Those with the willingness to adapt will not have the power to do so, and those with the power to adapt will not have the willingness. And it feels like we're halfway there. How do we establish a system of checks and balances to avoid this?
This read like a Philip K. Dick, Ubik-style advertisement for a dystopian future, and I’m pretty amazed it is an actual blog post by a corporate leader in 2025. Maybe Sam and Dario should be nominated for Hugos or something…
Some reasoning tokens on this post:
>Intelligence too cheap to meter is well within grasp
And also:
>cost of intelligence should eventually converge to near the cost of electricity.
Which is a meter-worthy resource. So intelligence's effect on people's lives is on the order of one second of toaster use each day, in present value. Which raises the question: what could you do with a toaster-second, say, 5 years from today?
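The toaster-second figure is roughly consistent with the 0.34 Wh/query number quoted later in this thread, assuming a typical ~1.2 kW toaster:

$$0.34\ \text{Wh} \times 3600\ \tfrac{\text{J}}{\text{Wh}} \approx 1224\ \text{J}, \qquad \frac{1224\ \text{J}}{1200\ \text{W}} \approx 1\ \text{s}$$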
Do you think we will get AI models capable of learning in real time, using a small number of examples similar to humans, in the next few years? This seems like a key barrier to AGI.
More broadly, I wonder how many key insights he thinks are actually missing for AGI or ASI. This article suggests that we've already cleared the major hurdles, but I think some major pieces are still missing. Overall his predictions seem like fairly safe bets, but they don't necessarily suggest superintelligence as I expect most people would define the term.
This level of conceitedness can hardly be measured anymore; it's on a new scale. Big corps will build and label whatever they like as a "superintelligent" system, even if it has plain if-conditions placed inside to suit their owners' interests.
It'll govern our choices, shape our realities, and enforce its creators' priorities under the guise of objective, superior intelligence. This 'superintelligence' won't be a benevolent oracle, but a sophisticated puppet – its strings hidden behind layers of complexity and marketing hype. Decisions impacting lives, resources, and freedoms will be made by algorithms fundamentally skewed by corporate agendas, dressed up as inevitable, logical conclusions.
The danger isn't just any bias; it's the institutionalization of bias on a massive scale, presented as progress.
We'll be told the system 'optimized' for efficiency or profit, mistaking corporate self-interest for genuine intelligence, while dissent gets labeled as irrationality against the machine's 'perfect' logic. The conceit lies in believing their engineered tool is truly autonomous wisdom, when it's merely power automated and legitimized by a buzzword. AI LETS GOOOOOOOOOOOOO
I started quickly reading the article without reading who actually wrote it. As I scanned over the things being said, I started to ask myself: Who wrote this? It's probably some AI proponent, someone who has a vested interest. I had to smile when I saw who it was.
> although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly
Famous last words.
The title is a likely nod to The Gentle Seduction by Marc Stiegler: http://www.skyhunter.com/marcs/GentleSeduction.html
This reminds me of Pat Gelsinger quoting the Bible on Twitter, lol.
Between this and Ed Zitron at the other end of the spectrum, Ed's a lot more believable, to be honest.
> The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.
> 2026 will likely see the arrival of systems that can figure out novel insights
The level of confidence here is interesting compared to recent comments by Sundar [1]. Satya [2] is also a bit more reserved in his optimism.
[1] https://www.windowscentral.com/software-apps/google-ceo-agi-...
[2] https://www.tomshardware.com/tech-industry/artificial-intell...
> A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools.
This isn't correct: people want good software and good art, and the current trajectory of how LLMs are used on average in the real world unfortunately runs counter to that. This post doesn't offer any forecasts on the hallucination issues of LLMs.
> As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours
This is the first time a number has been given for ChatGPT's energy cost per query, and it's obviously much lower than the ~3 watt-hours still cited by detractors, but there are a lot of asterisks in how that number might be calculated (is watt-hours even the right unit of measurement here?).
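For scale, here's a back-of-the-envelope sketch of what the per-query figure implies fleet-wide; the daily query volume is an assumption for illustration, not an OpenAI number:

```python
WH_PER_QUERY = 0.34    # Altman's figure for an average query (inference only)
QUERIES_PER_DAY = 1e9  # assumed volume, purely illustrative

daily_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e6  # Wh -> MWh
avg_power_mw = daily_mwh / 24                     # continuous draw
print(f"{daily_mwh:.0f} MWh/day, about {avg_power_mw:.0f} MW continuous")
```

Even at a billion queries a day, that comes to roughly 340 MWh/day, or about 14 MW of continuous draw, a fraction of a single large datacenter, which is why the asterisks (training, embodied datacenter costs, what counts as an "average query") matter so much.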
My attitude towards AI is one of balance: not overly dependent, but not completely rejecting it either. After all, AI is already playing an important role in many fields, and I believe the future will be a world where humans and AI coexist and collaborate. There are many things humans cannot do alone, but AI can help us achieve them. We provide the ideas, and AI can turn those ideas into actions, helping us accomplish tasks.
I’ve heard a joke that under Steve Jobs, the number of big initiatives at Apple was limited to the number of execs Steve could shout at in a given day.
I see a similar thing at work — the number of projects a developer can get through isn’t bounded by the lines of code they can churn out in a day. Instead it’s bounded by their appetite for getting shouted at when something goes wrong.
Until you can shout at an LLM, I don’t think they’re going to replace humans in the workplace.
Call me a cynic, but Gary Marcus and Sam Altman are the last people I want to read about AGI and related topics.
Gary has invested heavily in an anti-AI persona, continually forecasting AI Winters despite wave after wave of breakthroughs.
Sam, on the other hand, is not just an AI enthusiast; he speaks in a manner designed to build the brand, influence policy, and continuously boost OpenAI's valuation and consolidate its power. It's akin to asking the Pope whether Catholicism is true.
Of course, there might indeed be significant roadblocks ahead. It’s also possible that OpenAI might outpace its competitors—although, as of now, Gemini 2.5 Pro holds the lead. Nevertheless, whenever we listen to highly biased figures, we should always take their claims with a grain of salt.
Yesterday, I gave ChatGPT links to three recipes and told it to make me a grocery list.
It left off ingredients. The very gentle singularity…
> " I hope we will look at the jobs a thousand years in the future and think they are very fake jobs, and I have no doubt they will feel incredibly important and satisfying to the people doing them."
So when AGI comes, I am curious what the new jobs will be.
I see that prompt engineer was one of the jobs created, since it was the way to ask an LLM for certain tasks, but now AI can do that too.
I'm thinking that any new jobs AI creates, AI will just take anyway.
Are there new jobs coming from this abundance that is on the horizon?
> It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year
I heard similar things in my college dorm, amid all the hazy smoke.
It’s very difficult to take this stuff seriously. It’s like the initial hype around self-driving cars wound up by 1000x. Because we got from 1 to 100, of course we’ll get from 100 to 200 in the same amount of time. Or less! Why would you even question it?
> the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.
Does anyone know if there are well-established scaling laws for reasoning models, similar to Chinchilla scaling? (i.e., is the above claim valid?)
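For reference, the well-established law for pretraining is the Chinchilla fit (Hoffmann et al., 2022), which models loss as a function of parameter count $N$ and training tokens $D$:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad E \approx 1.69,\; A \approx 406.4,\; B \approx 410.7,\; \alpha \approx 0.34,\; \beta \approx 0.28$$

As far as I know, nothing comparably established has been published for reasoning models: OpenAI's o1 announcement showed benchmark performance improving smoothly with both train-time and test-time compute, but no public parametric law like the one above, so the quoted claim reads as a bet rather than an extrapolation.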
It's like the tenth time OpenAI has invented AGI?
Like a storefront advertising "live your wildest dreams" in pink neon. A slightly obese Mediterranean man with questionable taste tries to get you into his fine establishment. And if you do enter the first thing that meets you is an interior with a little bit too many stains and a smell of cum.
That's the vibe I get whenever Sam goes on his bi-quarterly AGI hype spree.
I'm feeling a bit cheated it's 2025 and I just bought a brand new car that does not drive itself and runs on dead dinosaurs.
Maybe. AI models have continued to scale up at a rapid rate, and have continued to get better at performing ever more impressive tasks. Sure, yes, the OP is breathless corporate-speak, but given how much impressive progress we've seen in AI in just a few years, it would be foolish to dismiss these pronouncements out of hand.
On the other hand, we may need more practical/theoretical breakthroughs to be able to build AI models that are reliable and precise, so they stop making up stuff "whenever they feel like it." Unfortunately, the timing of breakthroughs is not predictable. Maybe it will take months. Maybe it will take a decade. No one knows for sure.
A lot of good software engineering is just tribal knowledge. The people who know know.
Things like when to create an ugly hack because the perfect solution may result in your existing customers moving over to your competitor. When to remove some tech debt and when to add to it.
When to do a soft delete vs. when to do a purge. These things are learned when a customer shouts at you and you realize that you may be the most intelligent kid on the block, but that won't really help the customer tonight: the code is already deployed, and your production deployment means a maintenance window.
While I prefer "event horizon" over "singularity", part of the reason I blogged about this distinction years ago was that the event horizon always seems to be ahead of you as you fall in to a black hole*.
My blog posts didn't age all that well, and I've learned to be a little more sceptical about the speed of technological change, just as the political events over the intervening years have made me more aware of how fast political realities can change: https://benwheatley.github.io/blog/2016/04/12-00.31.55.html and https://benwheatley.github.io/blog/2022/09/20-18.35.10.html
* at least until the rate of change of curvature gets so high you're spaghetti, you're (approximately) co-moving with the light from your own body. This means that when you cross the event horizon, you still see your own legs, even though the space the light is in is moving towards the singularity faster than the light itself moves through that space: https://youtu.be/4rTv9wvvat8?feature=shared&t=516
> There are other self-reinforcing loops at play. The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems. And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off.
> If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different.
It's really cool to hear a public figure seriously talk about self-replicating machines. To me this is the key to unlocking human potential and ending material scarcity.
If you owned a pair of robots that, with sufficient spare parts, could repair each other and do other useful work, you could effectively do anything, starting by using them to do all the things necessary to build copies of themselves.
Once we as a species have exponential growth on that, we can do anything. Clean energy, carbon sequestration, Von Neumann probes, asteroid mining, O'Neill Cylinders: it's all possible.
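The arithmetic behind that optimism, as a toy model: if a population of robots can replicate itself every $\tau$ units of time, it grows as

$$N(t) = N_0 \cdot 2^{t/\tau}$$

so a starting pair ($N_0 = 2$) with a one-year doubling time reaches about two million robots in 20 years and about two trillion in 40. At that point the binding constraint is no longer the math but materials, energy, and the slowest step in the supply chain.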
Will the period between whole classes of jobs going away and a new social contract be short enough so that a Butlerian Jihad doesn't kick off?
Regarding a superintelligence creating new cures: a million-plus people die from malaria and AIDS combined each year. We have effective treatments for both, yet USAID was recently shut down.
I enjoy technology, but less and less so each year, because it increasingly feels like there's some kind of disconnect with the real world that's hard to put my finger on.
The "fundamental limiter of human progress" is not energy/intelligence lol. No - surprise, it's people. Bear in mind he builds this "gentle singularity" in the great progressive UAE.
What's the goal of posting this right now? A lot of what's written here seems to be well-trodden ground from the last two years of discussions, is it just to centralize a thesis within one post?
> For a long time, technical people in the startup industry have made fun of “the idea guys”; people who had an idea and were looking for a team to build it.
I get the gist of what he is saying, but I really think that most of the "idea guys" who never got farther than an idea will stay that way. Sure, they might spit out a demo or something, but from what I've seen, the "idea guys" tend to be the "I want to play business" guys who have read all the top books and refine their PowerPoints but never actually seem to get around to, you know, running a business. I think there is an underlying difference there.
I do see AI as a great accelerator. Just as scripting languages suddenly unlocked some designers who could make great websites but couldn't hang with pointers and malloc, I think AI will unlock great idea guys who can make great apps or businesses. But it will be fewer people than you think, because "building the app" is rarely the biggest challenge - focused intent is much harder to come by.
I do think the age of decently robust apps getting shat out like Flash games is going to be fun and chaotic, and I am 100% here for it.
My first thought was: who could possibly say with a straight face that we are close to superintelligence after the spectacular failure of AI to scale up with more compute?
Oh right, Sam Altman.
If OpenAI/Sama believe they are taking off, in the sense that AI is building the next model, then we should see a step change in that next model. As they accelerate, the gap will grow; we'll see with the next model whether this is PR/marketing or whether they really are taking off.
I can't seem to reconcile this with the fact that there has been no significant improvement to the transformer since it was introduced 8 years ago. None of the AGI components, beyond those that emerged with hyperscale, have been achieved. Where is the superintelligence going to come from?
The singularity already happened; we are just not aware of it yet. At least, that's what I'm observing here. Most smart/AI products reduce the human to a servant doing the refilling, refuelling, and error-checking. Who is in control remains the question. Money?
> I have no doubt they will feel incredibly important and satisfying to the people doing them.
How rude.
ok ok, I'm holding my breath!
"Rich" is a relative term, the existence of the rich requires the existence of the poor, and according to the article the rich will get richer much faster than the poor. There's nothing gentle about this singularity
Reminds me of playing https://apps.apple.com/us/app/artificial-superintelligence/i...
I can definitely imagine that a whole swathe of paperwork-based jobs, such as regulatory, quality assurance, report-writing, and all the non-personnel bits of HR, could become mostly AI without a lot of problems.
Feels like 1999 all over again. This time, I think it really is different.
Will AI be able to take drive-thru orders after the Singularity?
> and not too concentrated with any person, company, or country.
That’s heresy, with this crowd. Everyone wants to be The Gatekeeper, and get filthy rich.
It’s very hard to trust his words after he’s become the leader of a billion-dollar for-profit company. I miss the old Sam.
> Very quickly we go from being amazed that AI can generate a beautifully-written paragraph
I mean, I suppose Sam loves ChatGPT like his own child, but I would struggle to describe any of its output as 'beautiful'! 'Grating' would be the word that springs to mind, generally.
Do these tech overlords find it really hard to resist the “pitching in” part even after they’ve already deployed millions and billions in PR? Maybe just the itching thumb? Yeah, maybe.
> AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present
And then nothing substantial after this proclamatory hot-take. So let’s just choose to believe le ai propheté.
It’s like a post written as quick instructions to a PR team (and then an LLM asked to inflate it), from the comforts of a warm and cozy Japanese seat.
Definitely not written by Gemini, at least; it usually does a better job than this. Well, at least, like Zuck, he eats the food that he himself killed.
Will be looking forward to titles like “The Vehement Duality” &c in the near future.
if it’s gentle, it’s not a singularity
> A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools.
"Hey, it'd be a shame if somethin', uh, happened to that nice bit of expertise ya got there, y'know. A darn shame."
When will AI be able to tell us which gods exist and what they really want us to do?
Right before I came to this article, I watched a video about AlphaFold determining the structure of almost all proteins, with a mention of other programs working on other kinds of crystal structures (magnets, superconductors). I am starting to feel the acceleration. I wish I felt more hopeful.
You can fuck right off with your "gentle singularity" until you start sharing your profits with everyone whose work you ripped off, and are continuing to rip off.
It's funny in a cosmic way how YC was once led by this guy.
The OpenAI pitch: I have a solution to all problems; it's like consciousness in humans, but for robots, and it's better... singularity!
Sam you need to touch grass.
> Many people will choose to live their lives in much the same way, but at least some people will probably decide to “plug in”.
I bet he wants to be the first to "plug in" and become the first AI enhanced human.
I quite like the blog post, although I think about 98% of the comments here are slagging it off. It's maybe a bit optimistic, but sometimes it's nice to look on the bright side. It's as if society were about to develop electrical equipment for the first time and all people can say is that it will never work as well or arrive as quickly as it might, and won't you think about the manual crank-turners who will have to find other jobs, and so on.
"superintelligence research company"
Fantastic. I look forward to all these benefits accruing to the upper classes and billionaires, in the same way as globalisation did.
we are not past the event horizon.
This is a text by a venture capitalist trying to ensure continuing investment in a money-losing product.
“We” are not wondering whether AI can write “a beautifully written novel.” Yes, some people are.
Other people know a novel cannot be written by a machine— because a novel is human by definition, and AI is not human, by definition.
It’s like wondering if a machine can express heartfelt condolences. Certainly a machine can string words together that humans associate with other humans who express heartfelt condolences, but when AI does this they are NOT condolences. Plagiarized emotional phrases do not indicate the presence of feeling, just the presence of bullshit.
> There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.
Yeah dawg imma need a citation for that. Or maybe by "the world" he means "Silicon Valley oligarchs", who certainly have been "entertaining" all sorts of "new policy ideas" over the past half year.
Pure bullshit. Hype, Hype Hype. Repeat.
This would be a sign of deep delusion if it weren't transparently self-serving.
can't wait for the day this bubble bursts
> A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools.
"probably", "if they embrace the new tools". Hard to read anything but contempt for the role of humans in creative endeavors here, he advocates for quantity as a measure of success.
> We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.
Not sure if wishful thinking trying to LARP-manifest this future into being, or just more unfalsifiable thinking where we can always be said to be past the event horizon and near to the singularity, given sufficiently underwhelming definitions of "event horizon" and "nearness."
We've somehow had a calm few months in the industry without the daily hyperbole on AGI, superintelligence and the upcoming singularity. After the string of underwhelming incremental updates from all the top LLM publishers I guess they need to start the hype cycle again.
> The rate of new wonders being achieved will be immense. It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year.
How about affordable housing?
> There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before
No "the world" won't be getting richer. A small subset of individuals will be getting richer.
The "new policy ideas" (presumably to help those who are being f*d by all this) have been there all along. It's just that those with the wealth don't want to consider them. Those people having even more wealth does _not_ make them more likely to consider those ideas.
Honestly this drivel makes me want to puke.
I bet Sam used ChatGPT to write this. There are some telltale signs, including the hyphens.
I don't know if gentle is the right word. Maybe "gentle facehugger and chestburster" is more apt. It's slowly infecting us and eating us from the inside, but so gently that we don't even care. We just complain on HN while doing nothing about it.
> Scientific progress is the biggest driver of overall progress
> There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before
Real wages haven’t risen since 1980. Wealth inequality has. Most people have much less political power than they used to as wealth - and thus power - have become concentrated. Today we have smartphones, but also algorithm-driven polarization and a worldwide rise in authoritarian leaders. Depression and anxiety affect roughly 30% of our population.
The rise of wealth inequality and the stagnation of wages corresponds to the collapse of the labor movement under globalization. Without a counterbalancing force from workers, wealth accrues to the business class. Technological advances have improved our lives in some ways but not on balance.
So if we look at people’s well-being, society as a whole hasn’t progressed since the 1980s; in many ways it’s gotten worse. Thus the trajectory of progress described in the blog post is make-believe. The utopia Altman describes won’t appear. Mass layoffs, if they happen, will further concentrate wealth. AI technology will be used more and more for mass surveillance, algorithmic decision-making (that would make Kafka blush), and cost-cutting.
What we can realistically expect is lowering of quality of life, an increased shift to precarious work, further concentration of wealth and power, and increasing rates of suffering.
What we need instead of science fiction is to rebuild the labor movement. Otherwise “value creation” and technology’s benefits will continue to accrue to a dwindling fraction of society. And more and more it will be at everyone else’s expense.