I'd like more info as well and am actively seeking it. So far I've started an AI company, and it looks like the business will focus on custom AI development for companies, with the primary emphasis on ethical implementation: creating new jobs and optimizing existing ones rather than outright replacing people with AI, which would likely be comically disastrous anyway, and rightfully so. People need to work, and I don't mean just for money or survival; work is deeply ingrained into our very being whether we want to work or not. I'm not sure how much of a mark my company will make, but I'm hoping it helps push this tech in a better direction than a tech dystopia run by corrupt governments, corrupt corporations, or (most likely) a little of both.
> effects on poorer countries
For what it's worth, I would predict these effects to be dramatically positive: LLMs almost immediately cancel out the disadvantage of being a non-native speaker of your business language, and the relative gain in economic productivity for those workers is far greater than for someone who is already trained and fluent.
All of which is to say: it's reasonable to be concerned about political and economic change, but if your starting position is caring more about global solidarity than about job loss in the richest countries, your concern should probably point toward "how do I make sure this stays legal and cheaply available everywhere?" rather than toward regulation.
> Don't get me wrong, I know regulations are needed
Open calls for regulation are not productive. Regulation should be a last resort, not a knee-jerk response. What do you think needs regulation, and more importantly, how would you effectively enforce it? A law is policy + enforcement, after all, and a poorly crafted AI regulation will achieve nothing except shifting the balance of power toward entrenched groups.