As AI has continued to improve quickly, it’s been interesting to watch the sentiment of the tech community get more negative on it. “It’s not very good yet.” “No improvement since GPT-4.”
Objectively, today’s AI is incredibly impressive and valuable. We blew past the Turing test and yet no one seems to marvel at that.
I’d argue we have yet to discover the most effective ways to incorporate the existing models into products. We could stop progress now and still have compelling, industry-changing product launches for the next few years. I’m confident customer support, historically a large source of human employment, will be automated shortly.
Is the negative sentiment fear from tech folks because they have a lot to lose? Am I just not understanding something? It feels like I can watch the progress unfold, and yet the community here keeps saying nothing is happening.
In my mind, LLMs are lowering the barrier to searching in the same way Google did in the early 2000s. Back then, you had to tailor your search keywords very specifically, avoiding words such as "the," "a," etc. Google eventually managed to turn queries such as "what's the population of Ghana" into ready-made answers.
LLMs do exactly that for more complex queries, with the downside of possible hallucinations. Suddenly, instead of doing research on the topic, a person looking to become "a programmer" asks ChatGPT to create a syllabus for their situation, and perhaps even to generate the contents of that syllabus. ChatGPT then "searches the internet" and composes the response.
I have gained confidence that LLMs won't be much more (at least in the next couple of years) than search engines, with the upside of handling complex queries and the downside of hallucinations. And for that, I find LLMs quite useful.
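To make the "syllabus" example concrete, here is a minimal sketch of that complex-query pattern, assuming the OpenAI Python SDK; the model name and prompt are illustrative placeholders, not anything from the post:

    # Minimal sketch: using an LLM as a "search engine" for a complex query.
    # Assumes the OpenAI Python SDK (pip install openai) and an API key in the
    # OPENAI_API_KEY environment variable; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": "You are a career-planning assistant."},
            {"role": "user", "content": (
                "I want to become a programmer with no prior experience. "
                "Create a 12-week self-study syllabus for my situation."
            )},
        ],
    )

    # Unlike a keyword search, the answer is synthesized rather than retrieved,
    # so it should be double-checked for hallucinations.
    print(response.choices[0].message.content)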
Ok, well, I guess we're not going to get a proper retrospective for any of the OpenAI stuff for a while. That's too bad. In the spirit of the post I wish Sam had written, I'll say one thing I learned from watching the show: even if you take advice from your own board, and what they suggested fails, they will still fire you, even though it was their advice. So you might as well just always do what you think is right.
This applies to other leadership roles as well.
I used to look forward to his takes. Some of the past posts were genuinely insightful, but now all I hear is the clichéd difficult road leading to an AGI whose consequences always seem utterly dire for everyone involved, except perhaps OpenAI.
I still remember being so excited to receive my OpenAI private beta key sometime in 2020. After watching a few videos of developers talking to it, I was incredibly hyped to create something ambitious with it, only to quickly become disappointed with its capabilities after wrangling with a bunch of prompts.
So when ChatGPT came out, I thought it was a cool toy with a chat-interface skin and nothing more. Before I knew it, AI (and its hype) had invaded a lot of unexpected corners of my life, and as more time passed, with more surprising and perverse capabilities being discovered, I found it harder and harder to believe in all the utopian visions Sam and others preached.
Hopefully a great super-intelligent god will properly retire me and my family before all our skillsets are automated away.
> We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies.
This follows the same agenda as the World Economic Forum's (WEF) "Collaboration for the Intelligent Age" [0], which Sam is also attempting to coin with a similar title (the "Intelligence" / "Intelligent" Age). [1]
It will be no surprise when he is invited to tell us all about how AGI will bring the utopia of Universal Basic Income (UBI) to everyone and save the world for the "benefit of humanity".
The truth is, "AGI" is a massive scam to raise more money to inevitably replace workers with AI and race it all to zero, without any alternative for those lost jobs.
[0] https://www.weforum.org/meetings/world-economic-forum-annual...
>We are now confident we know how to build AGI as we have traditionally understood it
But we don't even have good definitions to work with. Does he mean AGI as in "sentience", AGI as in "superintelligence", AGI as in "can do everything textual (text in, text out) a 95th percentile human can do", or "can do everything a human on a computer can do" (closed-loop interaction with compiler, debugging, etc.).
> We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.
Wow. Dude has really gone "full retard" on this one. They don't have AGI but they "know" how to build one. Quick, give me a couple trillion dollars.
This post really seems desperate as it tries to tap into people's inner FOMO. I wonder how much dumb money is still out there.
I’m worried to read this in case I get influenced.
I use LLMs every day, including the o1 model, and the hype doesn’t match the reality, which is pretty good but at most maybe a 15% increase in productivity. How are you meant to get AGI from that?
Summary: we need marketing to prepare for the next round of funding; we're burning a billion a month in losses. It's going to be a big bubble!
On Nov 30, 2022, Sam Altman knew exactly how LLMs work. He knew the design of his LLMs was such that they were not, and would never be, sentient. They would never be capable of "understanding" in the manner that living things understand the world, or even the way human beings understand a sentence.
Yet, very soon after Nov 30, 2022, Sam Altman was making statements about how important it was for governments worldwide to address the danger ChatGPT posed before it was too late.
He was on the hype train before Nov 30, 2022.
The Nov 30, 2022 announcement was itself part of the hype train.
OpenAI, Google, Microsoft, Meta, Apple, IBM, etc. have spent, and continue to spend, billions on LLMs. And just like Altman, they know exactly how LLMs work. They know the design of their LLMs is such that they are not, and never will be, sentient. LLMs will never be capable of "understanding" in the manner that living things understand the world, or even the way human beings understand a sentence.
Yet they continue moving the hype wagon forward, faster and faster.
Someone is making lots of money. And soon so many more will lose so much more.
That all seems mostly reasonable.
Bit surprised about the AGI part and also the "join the workforce" agent comment. I thought it was their policy not to anthropomorphize?
Hopefully someday LLMs will be hugely beneficial to society for their ability to identify correlations in data. And, by their very nature, they are good with language and thus helpful in programming contexts.
But LLMs have no understanding of what they write. They do not ponder. Are not curious; do not wonder. Do not think or have thoughts. Do not create or invent. Do not have Aha! moments.
Maybe some day a machine will be capable of these things. Maybe not. But LLMs - by nature of their algorithmic design - never will.
>...when any company in an important industry is in the lead, lots of people attack it for all sorts of reasons, especially when they are trying to compete with it.
>...
>We believe in the importance of being world leaders on safety and alignment research
It's interesting to consider the above excerpt, in light of the below excerpt from OpenAI's charter:
>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.
OpenAI doesn't claim to be leaders in safety and alignment research.
They "believe in the importance" of being leaders in safety and alignment research. For whatever that's worth.
But they do acknowledge themselves as the leading AI company.
Is it really in our interest as a species for the leading AI company to merely "believe in the importance" of leadership in safety and alignment?
Among the "all sorts of reasons" for people to attack the leading AI company, this strikes me as a fairly legitimate one. Just saying.
Also -- I notice that OpenAI seems to be criticized more than leading companies in other industries.
If maintaining your lead isn't in the interest of your stated mission... maybe you shouldn't actually be working to maintain that lead?
Did Sam or OpenAI ever publicly respond to Jan Leike's comments when he left? (Former head of alignment) https://threadreaderapp.com/thread/1791498174659715494.html
See also: https://openasteroidimpact.org/
A whole lot of words that didn't say much about specifics of the past or specifics of the future, just pablum and positive spins. It read as if he had an LLM help out (derogatory).
"We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity."
Claiming superintelligence, in this post and in this form, given how rarely LLMs manage to be consistently accurate, goes beyond wishful thinking into the magical, and through it all there is still the stink of fraud.
Gary Marcus's take:
>We are now confident that we can spin bullshit at unprecedented levels, and get away with it. So we now aspire to aim beyond that, to hype in the purest sense of the word. We love our products, but we are here for the glorious next rounds of funding. With infinite funding, we can control the universe.
That said, I think the negative arguments are overdone, and AGI and agents soonish are quite likely.
“We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies.”
Lord I hope the money runs out soon.
Let's play devil's advocate and say Sam is a conman, and that AGI or ASI, by any definition, will never arrive.
What does he have to gain from all this, assuming the bubble bursts in the end? Fortune? He will collect his salary over the course of the bubble, but I am not entirely sure how he makes money if the money eventually runs out.
Fame and Connections?
While I have very little technical idea about anything LLM, I am well versed in foundries, hardware manufacturing, and supply chains. Ever since he asked for a trillion dollars to build fabs and chips for AI, I have had a very cynical view of him.
Still not on board with the whole scaling up LLMs -> AGI thesis.
> I also learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges.
Comic book level villainy. I like the guy!
Naturally, there are no comments on the changes in governance or profit structure in these reflections.
He comments on the founding of OpenAI as though OpenAI, the (currently) capped-profit company, and OpenAI, the non-profit that today controls it, are the same thing. They are not, and they are planned to be split, a split that cannot possibly be justified under the non-profit's charter.
As I watch OpenAI's structural development over time, it becomes increasingly clear that the wildly incompetent board of OpenAI had some justification for firing Sam.
So Sam Altman is looking to keep the grift alive, gotcha
Is this like three inches wide on the screen so it looks longer? Sama padding the essay?
What is the message? I think it's that there are people behind this with human, relatable motives and foibles; that the consequences of this change are difficult to apprehend; that the main tool they have is to be incremental; and that it's just hitting its stride.
The comment about understanding what AGI means was compelling. I'd guess it may be something arrestingly simple, and they have the sense not to be meta about it, or to sound it out with freighted and inadequate words; they will just introduce it as it is.
Good luck.
> We are now confident we know how to build AGI as we have traditionally understood it.
Yes. It's just going to be another marketing term.
god, what a cunt
I stopped reading after “as we get closer to AGI.”
AI company says AI is the future and to buy now.
Also a good interview in Bloomberg: https://www.bloomberg.com/features/2025-sam-altman-interview
Boy, we have lost trust in our boy big time. I feel the essay is earnest. He's not giving concrete details, but I'd like to give him the benefit of the doubt.
I think these are the least insightful comments I’ve seen on HN, maybe ever.
Hating from the sidelines is easy. AGI (or whatever semantics you prefer) is simultaneously: (1) one of the most positively influential technologies developed in human history, and (2) a 1,000-fold magnifier of the failures of our current governance structures.
Adaptation is/will be required, and it's not going to be easy. But the finish line promises a significantly better future for (potentially) all. Sitting here arguing semantics, complaining about technological evolution (surface-level insights; I suggest going a level deeper), or making weird anti-Altman statements instead of anything substantive is, well, not interesting.
This is an incredibly vague essay. Let me be more explicit: I think this is a clear sign of a bubble. LLMs are very cool technology, but they are not the second coming. They can't do experiments; they don't have an imagination; they don't have an ethical framework; they're not agents in any human sense.