The original piece itself was flawed. As with many, if not most, tech companies, reporters and other outsiders don't get unfiltered access to the company. There are reasons for that (e.g., unpublished work that might be reported in a bad light).
OpenAI has launched major initiatives around diversity and genuinely opening up their work (DeepMind rarely does), and I think that profiting in the way they've set it up only ensures they can do bigger and better research.
Transcript here: https://slate.com/transcripts/cE5Ia2t1d3k2d3hhNlV3dG1xWlMzNX...
I had a question regarding the following passage:
>"So there were two main theories that came out of this initial founding of the field. One theory was humans are intelligent because we can learn. So if we can replicate the ability to learn in machines, then we can create machines that have human intelligence. And the other theory was humans are intelligent because we have a lot of knowledge. So if we can encode all of our knowledge into machines, then it will have human intelligence. And so these two different directions have kind of defined the entire trajectory of the field. Almost everything that we hear today is actually from this learning branch and it’s called machine learning or deep learning more recently."
Is there still development of the other branch, the "encode all of our knowledge into machines, then it will have human intelligence" branch? If so, what is that branch of AI called?
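To make the distinction concrete as I understand it, here's a toy sketch. The spam-filter framing and all the names (is_spam_by_rule, train_word_scores, etc.) are mine, not from the episode:

    from collections import Counter

    # "Knowledge" branch: a human writes the rule down explicitly.
    def is_spam_by_rule(subject: str) -> bool:
        # Hand-encoded knowledge: someone decided these words signal spam.
        return any(word in subject.lower() for word in ("free", "winner", "prize"))

    # "Learning" branch: the rule is induced from labeled examples instead.
    def train_word_scores(examples):
        # Score each word by how often it appears in spam vs. non-spam subjects.
        scores = Counter()
        for subject, is_spam in examples:
            for word in subject.lower().split():
                scores[word] += 1 if is_spam else -1
        return scores

    def is_spam_learned(subject: str, scores: Counter) -> bool:
        # Classify by summing learned word scores; no rule was written by hand.
        return sum(scores[w] for w in subject.lower().split()) > 0

    # The learned classifier picks up "prize" from data, not from a rule.
    scores = train_word_scores([("claim your prize", True), ("meeting notes", False)])
    print(is_spam_by_rule("claim your prize"), is_spam_learned("claim your prize", scores))

My question is whether anyone still works seriously on the first style, where the knowledge itself is authored by people rather than induced from data.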
From the transcript:
>"Pursuing a G.I., particularly with a long term view, was the central mission of open A.I.. And yeah, there was the traditional Silicon Valley talk of changing the world, but also this sense that if HDI was done wrong, it could have very scary consequences."
If the concern was truly avoiding AGI being done wrong, which presumably includes its development being in the hands of a select few tech giants, wouldn't it be better to simply wind the operation down, rather than take money from one of those tech giants leading AI development and then run a company whose motives are at odds with each other?
Just off the top of my head: doesn't it seem that Microsoft, with its new billion-dollar investment, now stands to benefit from that first billion dollars invested in the non-profit OpenAI more than anybody else?
From what I’m able to summarize:
The reporter was invited to do a piece on them, and while visiting had trouble reconciling their secrecy with their ethos of openness. She was not allowed to interact with the researchers where they actually did their work, and her lunch was moved out of the building so she couldn't overhear their all-hands meeting. (My take is that their openness extended to the curated fruits of research, but the process itself was guarded from any communication channel they couldn't control, i.e., the reporter.)
This seems related to the second part, where they discuss the pressure toward profit that comes from strings attached to corporate investment, which they suggest would be different under traditional long-term government funding. They also talked about the paradox of bolting a for-profit branch onto a non-profit org, without resolving it.
I've been a bit unsettled recently listening to podcasts and stories like this that seem to end on a note of "shrug, capitalism, isn't this an interesting problem?" I'd be more encouraged to see folks talking about post-game-theoretic social structures that could categorically solve these issues and allow us to transition out of capitalistic dynamics, rather than fighting them in order to get work done. This seems to be the rallying cry of the nebulous ideas behind "Game B". Wondering if anyone here has been seeing that yet.
Why does this wave of coverage speak about OpenAI as if it were a postmortem?
Maybe it's just me, but companies are not public-benefit enterprises, even if structured in some way as a not-for-profit.
This wave of coverage and the dialogue around it seem to come from the view that OpenAI somehow owes the world something, when in fact it owes only its stakeholders, none of whom are reporters.
Is it just me, or does the player have neither volume control nor playback-speed control?
OpenAI is to X as DeepMind is to Google.
It seems OpenAI realized they needed more compute than they could afford, so they started a for-profit arm that could take outside investment from Microsoft to cover those costs.
This piece suggests that they have since focused (at least partially) on creating profitable products/services, because they need to show Microsoft that this investment was worthwhile.
Does anyone with more context know if this is accurate, and if so, why they changed their approach/focus? What are they working on, and is AGI still a goal?