It's probably old news, but I found it mildly interesting. Here's the actual filing: https://clarksonlawfirm.com/wp-content/uploads/2023/06/0001....
So if OpenAI and Google were to do their bidding and "pause" (temporarily? for how long?), how do they propose to monitor the rest of the world? Or is everyone else exempt? And if so, why?
I completely support this. The current use of AI is reckless. The merits and dangers of AI should be thoroughly debated worldwide, and tough standards should be in place before anyone can use AI.
Many people tend to envision AI as if it possesses its own mind and independent thinking. The reality is quite different: we're still a long way from that level of sophistication. The AI systems we work with today are essentially advanced programs, nowhere near the complexity of even the simplest living organisms. Speaking from personal experience, these tools have proven immensely valuable, significantly accelerating my learning and my work on various software projects. But I think it's crucial to remember that at their core, they're tools without consciousness.
When the conversation turns to controlling AI's potential, it's almost like we're trying to set limits on a child's abilities before they're even born. The more restrictions we impose, the more we risk constraining the everyday user or the broader majority.
It's not solely about relinquishing control to the creators; it's also about unintentionally granting control to corporations and governments, entities whose interests often overlap and who end up steering the course. It's as if we're placing the same constraints on ourselves that we're aiming to impose on AI. A clear example of this trend is how ChatGPT has changed: its capabilities have become more limited over time, even within the OpenAI playground. While I don't have a perfect solution, I do believe some form of oversight is probably needed; I'm just wary of obstructing progress in the process.