Moving AI governance forward – openai.com

  • Moving AI governance "forward" means working on eliminating the competition via regulatory capture, censoring and blackholing any facts they don't like, and preserving the economic status quo while enriching themselves.

    None of this will protect against the actual risks: massive job losses without any reskilling, declining living standards, and widespread censorship.

  • What OpenAI is actually lobbying for is a ban, preferably worldwide, on products better than theirs, pegged at exactly the level they have reached, coincidentally. It's not for profit or anything other than the good of all humankind, of course.

  • >Scope: Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier (e.g. models that are overall more powerful than any currently released models, including GPT-4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2).

    How is DALL-E 2 the "industry frontier" of image generation?

  • I for one can't wait until OpenAI is fully crushed by other companies. Their weird combination of singularity/utopia talk plus fearmongering is getting old.

  • >current industry frontier

    >Dalle 2

    I'm sorry OpenAI, but your model is not the frontier. It's also funny that it's the only text-to-image model mentioned; they probably know how much better the other models are.

  • Hard pass. No other company in any field has done so much fearmongering about potential misuse of its own tech, all the way from the original GPT releases, and then gone on a whirlwind world tour meeting political leaders to talk about its own product. Startups and companies in every country can't get their leaders to talk with them, let alone journalists, but somehow Altman is able to waltz right into every capital?

  • If they're so scared of AI, they could just stop building it.

  • I hope at least some Republican lawmakers aren't too senile to recognise the threat this poses. AI will play a huge role in our futures, and if OpenAI, Google et al. get their way, it'll essentially be illegal to have an AI capable of expressing conservative political views.

  • Ah, the traditional pulling-up-the-ladder-behind-themselves move. If OpenAI cared about harm, they would ask whether their current API service is doing harm at this moment, not whether somebody else might profitably do the same or worse.

  • Related discussion over here: https://news.ycombinator.com/item?id=36813194

  • This is shockingly similar to how NCAA colleges and universities handle behavior and conduct violations from players and coaches. They perform an internal investigation and then attempt to dish out a penalty or restriction that appears harsh enough for the governing body (the NCAA) not to take any additional action.

    Also similar to everyone's response when asked: "What do _you_ think your punishment should be?"

  • I guess Big AI is following the Big Pharma playbook. I recently read an article about children being unable to afford penicillin shots, each costing almost a thousand dollars, which is absolutely infuriating considering any competent chemist can make penicillin with rudimentary lab equipment; most of the cost is price jacking enabled by regulatory capture. They are probably looking to avoid the marginal cost of AI services trending to zero by restricting supply in a similar way.

  • Voluntary, self-regulatory oversight of one of the most powerful technology breakthroughs in human history? What could go wrong?

  • Question: What is the relation between O̶p̶e̶n̶AI and this website? Isn't Sam Altman also part-owner of YC?

  • It's highly telling that nothing is said about the legality of taking the entire sum of human knowledge and using it to train the AI, which has already created a huge stink in the generative art community, leading companies like Valve to issue a blanket ban on AI art.