Let's shoot US innovation and leadership in the foot by establishing random limits on foundation model research.
According to the EO's guidelines on compute, something like GPT-4 probably falls under the reporting guidelines. Also, in the last 10 years GPU compute capabilities grew roughly 1000x. Where will we be even 2 or 5 years from now?
Edit: yes, regulations are necessary but we should regulate applications of AI, not fundamental research in it.
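To put rough numbers on the compute claims above, here's a back-of-envelope sketch. It assumes the EO's 10^26-operation reporting trigger and uses the common C ≈ 6·N·D training-compute heuristic with rumored (unconfirmed) GPT-4-scale figures; all of these inputs are illustrative assumptions, not official numbers.

```python
# Back-of-envelope sketch; parameter/token figures are public
# rumors for a GPT-4-scale run, not confirmed numbers.

EO_THRESHOLD = 1e26  # total training ops; reporting trigger in EO 14110

def training_flops(params: float, tokens: float) -> float:
    """Common heuristic for dense transformers: C ~= 6 * N * D."""
    return 6 * params * tokens

# Rumored figures: ~1.8e12 parameters (MoE, so the effective dense
# count is lower), ~13e12 training tokens -- illustrative only.
est = training_flops(1.8e12, 13e12)
print(f"estimated training compute: {est:.1e} ops")
print("would trigger reporting:", est > EO_THRESHOLD)

# If GPU throughput really grew ~1000x in 10 years, the implied
# compound rate is 1000 ** (1/10), i.e. roughly 2x per year.
rate = 1000 ** 0.1
for years in (2, 5):
    print(f"in {years} years: ~{rate ** years:.0f}x today's throughput")
```

Under these assumed figures the estimate lands a bit above the threshold, though using only the active (non-MoE) parameter count would put it well below it, which is why "probably" is doing real work in the comment above.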
Insane to me that, given a multi-year lead in tech, capability, and talent, the USA is shooting itself in the foot re: innovation around AI.
Talk about snatching defeat from the jaws of victory... damn
A great way to understand how all this works is to watch the All-In Summit talk "Bill Gurley presents 2,851 Miles"[1]. Basically: regulate your competition into the ground.
> Specifically, USAISI will facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts.
I have been afraid of over-regulation of AI but standards and testing environments don't sound so bad.
It does not sound like they are implementing legal regulations that will protect incumbents at the expense of AI innovation, at least at this point.
Regulatory capture in action right before our eyes. Fears of Skynet are going to lead us to a cyberpunk dystopia where only large corporations have any access to powerful AI. What a bizarre time to be alive
William Gibson had the "Turing heat" in seminal cyberpunk novel "Neuromancer". Here's the real-life beginning of just such an organization.
Thou shalt not make a machine in the likeness of a human mind
I guess we're heading for spice then.
I'm unsure what limits will do. Selling weapons and explosives is regulated, but that doesn't stop the government from doing it. So by limiting it, we're only limiting the people?
Cool. Great job guys. Now do one for CONSUMER DATA PROTECTION, RIGHTS AND PRIVACY. I WILL EVEN LET YOU COME UP WITH A FUNNY LITTLE 3 LETTER AGENCY NAME FOR IT. I DO NOT CARE.
"Despite the increasing complexity and capabilities of machine learning models, they still lack what is commonly understood as "agency." They don't have desires, intentions, or the ability to form goals. They operate under a fixed set of rules or algorithms and don't "want" anything.
Even in feedback loop systems where a model might "learn" from the outcomes of its actions, this learning is typically constrained by the objectives set by human operators. The model itself doesn't have the ability to decide what it wants to learn or how it wants to act; it's merely optimizing for a function that was determined by its creators.
Furthermore, any tendency to "meander and drift outside the scope of their original objective" would generally be considered a bug rather than a feature indicative of agency. Such behavior usually implies that the system is not performing as intended and needs to be corrected or constrained.
In summary, while machine learning models are becoming increasingly sophisticated and capable, they do not possess agency in the way living organisms do. Their actions are a result of algorithms and programming, not independent thought or desire. As a result, questions about their "autonomy" are often less about the models themselves developing agency and more about the ethical and practical implications of the tasks we delegate to them."
The above is from the horse's mouth (ChatGPT4)
My commentary:
We have yet to achieve the kind of agency a jellyfish has, which operates with a nervous system of roughly 10K neurons (vs ~100B in humans) and nothing resembling a brain. We have not yet been able to replicate the agency present in even a simple nervous system.
I would say even an amoeba has more agency than a $1B+ OpenAI model, since the amoeba can feed itself and multiply far more successfully and sustainably in the wild, with all the unpredictability of its environment, than an OpenAI-based AI agent, which ends up stuck in loops or derailed.
What is my point?
We're jumping the gun with these regulations. That's all I'm saying. Not that we shouldn't keep an eye on it, have a healthy amount of concern, and make sure we stay on top of it, but we are clearly jumping the gun, since AI agents so far are unable to compete with a jellyfish in open-ended survival mode (not to be confused with Minecraft survival mode) due to their lack of agency (as a unitary agent and as a collective).
Most of us care whether the drugs we're taking are properly tested and won't have adverse side effects, or at least that the adverse side effects are known so the risk/reward can be calculated. Most of us care whether the cars we drive are safe and don't have any hidden flaws that may fatally emerge. Same goes for food and drink, I assume. Actually, it's probably easier to find areas with beneficial regulations than areas with functionally no regulation at all. Why is it that in this case people are willing to abandon caution and just dive in without looking?
I must be missing something: I'm not seeing, in the linked press release, the information that is fueling the specific commentary here about what the government intends to do. All it seems to say is that they plan to create standards and provide testing environments. I'm sure there is more to it; I just didn't see where any of those facts were posted.
So I'm assuming some of you have seen more details - can someone share where they can be found?
Alarming how many people think that the development of AI should have…no government oversight? Are none of you familiar with history?
For those in the know, is this a bipartisan position? Any chance of seeing rules like this one "over-ruled" (don't know the exact technical term) in case of different politicians coming to power in the US?
That (and the rest of the regulatory package) looks like a framework to handicap AI technology when existing laws can handle the existing problems.
It can only help existing companies to stifle competition and guarantee revenue.
Just the people I wanted to regulate cutting edge niche technology.
I believe this could be about denying technological advantages to their competitors and to potential threats to their control of the markets.
What a fucking joke. I am voting libertarian if i bother to vote at all. They're against AI regulation.
Recent and related:
Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence - https://news.ycombinator.com/item?id=38067314 - Oct 2023 (334 comments)