One might imagine that the layoff-based business model for AI could eventually backfire once people discover how bad the output is, but the incentives seem aligned for a society with heaps of low-quality, AI-generated garbage and major unemployment/underemployment.
No one owes anyone a job. Systems like that have been tried and they don’t work.
This is going to be a bumpy transition, but on the other end the jobs that survive will be creative and fulfilling - for example, automating work every day so that (barring corner cases) no one has to do that task again.
Putting AI in and letting it do a bad job is a great way for AI to be in there eventually doing a good job. Humans may not be needed to do the work but they will be needed to tend to the systems and automations driving the AI - and/or to the agents driving the AI. They’ll be needed to improve what the AI is doing, handle corner cases, and add new use cases. And they’ll fill in the gaps - until those can be automated too.
Once everyone’s doing that, it’ll be an automation race. The more humans a company has, the more capacity it will have to automate more work. Then we’ll be back to full-steam hiring, but the skill sets will be very different. At places like Duolingo, job titles like “AI automation specialist” will replace “translator”.
It’s going to be okay, we will get through this!
With the current speed of AI progress, we are probably one or two years away from any generic AI being able to teach languages better than Duolingo.
I wonder what happens when the AI starts to hallucinate? https://www.sify.com/ai-analytics/the-hilarious-and-horrifyi...
Any time I see AI "art" I assume the worst: https://newsocialist.org.uk/transmissions/ai-the-new-aesthet...
I think it’s more like AI empowers and 3x’s the creatives who learn to use it. In any field where highly intelligent auto-complete is useful, it replaces two-thirds of the people you need to hire. The key is to learn to use the tools, as it has always been with new tools like computers.