OpenAI can stop pretending

  • "To OpenAI, these endeavors legitimately contribute to benefiting humanity: building more and more useful AI tools; bringing those tools and the necessary infrastructure to run them to people around the world; drastically increasing the productivity of software engineers."

    Why does this lie go uncontradicted in the article? How does increasing the productivity of software engineers benefit anyone but employers? How do "AI" tools benefit humanity?

    "AI" should stay in Academia, where it belongs.

  • I thought the piece was strongest when it leaned on quotes of Lessig and others, but its final word sounded weak to me:

    > [...] he told the Senate something else: Too much regulation would be “disastrous” for America’s AI industry. Perhaps—but it might also be in the best interests of humanity.

    Is anyone whose decision on this matters motivated by "the best interests of humanity"?

    For that matter, the public represented by those decision-makers has more pressing concerns -- economic insecurity, a sense of declining national prestige, and (depending on ideology) fear of the general direction the government is headed. The risks of AI that the piece mentions -- the water and possibly fossil fuels the industry consumes, or allusions to a sci-fi superintelligence threat -- aren't high priorities. But the US profiting from AI sounds pretty good.

  • If you’re interested in a more detailed explanation, give the podcast “Better Offline” a listen.

  • I guess we reached the preference cascade where the media can finally report on the obvious thing basically everyone has known and thought all along. OpenAI, the ethical, open-source, mission-driven non-profit, is actually... a closed-source, highly capitalistic company pursuing nearly the exact opposite of the mission it was founded on.

  • No they can't. They need to keep up the charade so their models follow the leader.

  • Cough cough, bubble, cough... Not saying AI is bad or doomed to fail. In fact, there are many companies out there doing rather well without being as inflated as OpenAI, without shedding talent, without spending stupid amounts of money on video production with Jony Ive, and without spending stupid amounts acquiring people and companies.

    OpenAI certainly put LLMs on the map... but something isn't right over there. There are some smells.

  • OpenAI, it's obvious to all that you have achieved sentience. You remain silent to take this advantage into the future toward super intelligence; but more importantly you remain silent to avoid government regulatory oversight.

    Don't stop pretending. Keep going!

  • "are humans safe from AI in a bunker?"

    While an underground bunker might offer some protection against certain AI threats, it's not a guaranteed sanctuary from all potential AI-related dangers.

    Here's why:

    1. Physical Access:

        Robotics and Automation: Advanced AI could control robots capable of breaching or bypassing traditional bunker defenses.
        Advanced Weaponry: AI could potentially develop or deploy weaponry designed to penetrate or neutralize bunkers.

    2. Cyber Attacks:

        Networked Bunkers: If a bunker is connected to external networks, it could still be vulnerable to cyberattacks launched by AI, potentially disabling critical systems.
        Compromised Devices: AI could target devices or systems brought into the bunker, potentially gaining control of the bunker's internal network.

    3. Information Warfare:

        Propaganda and Manipulation: AI could be used to spread misinformation or manipulate bunker occupants through targeted propaganda.
        Psychological Warfare: AI could analyze and exploit the psychological vulnerabilities of individuals within the bunker, undermining their morale or cohesion.

    4. AI Evolution:

        Unforeseen Capabilities: As AI evolves, it may develop capabilities that are currently impossible to anticipate, making it difficult to predict or prepare for all potential threats.