Not sure about creativity. But.
Previously, I had to deal with "Junior python" or "Junior bash" crap at $work.
Finding the dangerous bugs was measured in seconds. Helping the person improve for the future used to work.
Now the whole company is requesting code from ChatGPT. Hey, please, use my script.
I have to deal with "looks fine at first/quick view" code that needs deep analysis to understand what it's trying to do, why, and where the (100% guaranteed) hidden "break production" failures are.
It's more like "Where's Waldo"... you know that there will always be at least one or two things really, really wrong, always one or two (or more) catastrophic details, but hidden below something "apparently nice".
And what is worse, all effort to point out and fix issues is lost, or has to be repeated again and again.
I apologize, but as a senior sysadmin/oncall model, I cannot run your chatgpt code, until you understand how things work.
I am another AI pessimist. Can I please ask the optimists to list the good things LLMs can do for humanity as a whole?
I don't mean banal stuff like Copilot, which is a double-edged sword that might be used against junior developers. I mean world-changing benefits, one step closer to the techno-utopia.
Because on paper, the net benefits vs net negatives, for me and other AI pessimists like the author, are not worth the amount of spam, customer service bollocks and lost jobs LLM will cause to basically make mega-corporations richer.
So please tell me, what will LLMs ever do for us?
I wonder how many people would just stop using the internet (or use it in a very limited fashion) because of the explosion of LLM generated content.
On a similar note, I watch little TV for the exact same reason: I have little control over what I want to see, and most of the interesting stuff that I may want to watch may be spread across different channels and broadcast at nearly the same time. Perhaps I should take the same step with the internet, given that it's becoming filled with junk that I don't want.
The rush to just shit on everything is so tiresome.
Why write this article other than to smugly say "told you so" in the cases where you turn out to be right? It is a zero-risk take.
Looking at the advances in AI (Chess, Go, Protein Folding, MidJourney, ChatGPT) and your takeaway being "Humans will use this in bad ways" shows a ferocious lack of imagination.
I notice a desperate, but failing, attempt to lump the advances in AI into the same pool as crypto greed, because that was the smug naysayers' nirvana.
> Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and ML will probably remain useful for writing cover letters for you
When I read things like this it makes me think the author hasn't used ChatGPT in their job yet.
Here is a really simple example of how I used ChatGPT this afternoon that saved me, I would estimate, about 2 hours of work:
I had 2 CSV files, with different formats but which (supposedly) had the same functional information in them.
I had a very complicated BigQuery SQL statement that worked on the first file format by importing it as a blob to a table then combining it with a bunch of other CSV files. I wanted to know how much I might need to change my query if I started using the 2nd CSV file (which takes much less time to export from the system that produces it).
The query of course has a big complicated SELECT statement, but also several common table expressions and joins, some of which use columns from the CSV file I was looking to replace.
So I gave ChatGPT the 2 header rows, and the big complicated query. I asked it to tell me likely mapping between the 2 CSV files for similar columns, and to give me a list of columns that appeared in one but not the other. I asked it to mark with an exclamation mark those columns which appeared in the query.
It got some things wrong, but because I'm pretty familiar with the query and the files I was able to pick up on those, and it was much, much easier to browse the output and pick out the errors than it was to break down the query and do all that analysis from scratch.
The whole process using ChatGPT took me about 15 minutes.
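The mechanical half of that task can be scripted; a minimal sketch (the column names and query below are made up for illustration) that diffs two CSV headers and marks which columns appear in a query. It only catches exact name matches, so the fuzzy "likely mapping" between differently named columns is still the part ChatGPT helped with:

```python
import csv
import re

def read_header(path):
    """Read the first (header) row of a CSV file."""
    with open(path, newline="") as f:
        return next(csv.reader(f))

def compare_headers(cols_a, cols_b, query_text):
    """Diff two header lists; '!' marks columns referenced in the query."""
    a, b = set(cols_a), set(cols_b)
    used = {c for c in a | b
            if re.search(rf"\b{re.escape(c)}\b", query_text)}
    mark = lambda c: ("!" if c in used else "") + c
    return {
        "in_both":   sorted(mark(c) for c in a & b),
        "only_in_a": sorted(mark(c) for c in a - b),
        "only_in_b": sorted(mark(c) for c in b - a),
    }

# Hypothetical headers and query, just to show the shape of the output:
report = compare_headers(
    ["order_id", "ts", "amount"],
    ["order_id", "timestamp", "amount_usd"],
    "SELECT order_id, SUM(amount) FROM orders GROUP BY order_id",
)
print(report)
```

Eyeballing that report against the real query is roughly the "browse the output and pick out the errors" step described above.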
And I have wins like that I would say about once per day. I mean it: I'm saving probably about 2 hours work per day by using ChatGPT on average, on tasks just like this.
Now multiply that by all the shit that people are doing all the time and think about all the needs that will get met as a result of this increase in productivity that are not currently being met, and you have an idea of why AI is fucking awesome, ESPECIALLY given the fact that we need a decreasing working population to support an increasing retired population.
I'm in my late forties and have witnessed only a handful of transformative technologies in my time. Nothing in the last 15 years has given me the "tingly feeling" of excitement (you know, that feeling that we're on the cusp of something transformative) the way the recent progress in AI has.
While the new AI frontier might be led by prohibitively expensive (and closed) large language models, we're also seeing great grass-roots progress at a smaller scale with modest models trained by the developer community. I trained a baby GPT the other day using llama2.c for my own use cases.
It's Linux vs Sun/Unix all over again.
I have to disagree. Some things will get crappier, and some things will improve. Yes there's hype, but whether it's the same as crypto hype depends on whether there's actually anything useful behind the hype, and unlike crypto, I think it's pretty guaranteed there's something useful here (purely by getting value out of ChatGPT, infinitely more value than I ever got from anything crypto related).
> Finding and setting up an appointment with a therapist can be difficult for a lot of people - it's okay for it to feel hard.
Uff, what about crappy therapists? If an AI bot tells you to kill yourself, it's a pretty crappy AI bot. But there are a lot of crappy licensed therapists too. There's also a lot of crappy articles you can find on Google. The world is full of crappy resources. AI can and most likely will be used in all sorts of medical use cases.
Like social media ruined the world despite its initial promises, like crypto ended up with grifters of all kinds, AI will follow.
Yes, there are some narrow applications that will be GOOD. But will the good outweigh the bad? No, not at all. It never has.
We'd all be happier as a society if we went back in time.
I'm an age-of-AI optimist. Not in the sense that AI will solve all problems, but rather in the opposite. The allure in the promise of so much power, so much profit, will be irresistible to those organizations that are already too big to fail and have the means to pursue the shimmer. But like this article articulates, it'll expose even more dysfunctions than we're accustomed to. Within the gaps, though, there will be opportunities. Particularly for those who modestly want to make a difference in a given community, without necessarily the ambition to change the world™. The world doesn't get "better" or "worse"; it's an ongoing and never-ending experiment. We don't discover the "right" way to do anything. We just try stuff and when it's annoying enough, we self-correct. AI has so much potential for annoyance that we should just rejoice in the resulting opportunities.
It's pretty astonishing how rapidly everyone in the world went from "Skynet" to "laughable garbage" in a matter of a few months.
I think that much of what is addressed here will not entirely be from AI, but may in the future be misattributed to it because the curves appear to line up. Don't forget that wage increases have been stagnant for a long time, and that inflation is always growing - we never see deflation in the so-called 'good times' [+].
The climate change movement for example is ultimately a movement to reduce the resource usage of the working class. Reduced resource usage should in theory lead to a reduction in economic growth - but it hasn't. This is because they simply attained their growth through other means - reduced salary (adjusted for inflation), larger tax, shrink-flation (selling something smaller at the same price), etc. They will always post record profits to appease investors, and that growth will directly come from your pockets.
AI is probably the only thing slowing this down, and it likely is a bubble. When it bursts, do you think these companies will go back to employing humans for customer services? Like hell they will, you just won't get any customer support at all. You may say "fine, I'll take my money elsewhere", but you'll find yourself picking the lesser of evils [++]. Anybody who tries to offer human interaction will simply not be competitive on price, and people are relatively poorer than they were - so they have no choice. It's not as if you can go without water, gas, electric, phone, phone provider, ISP, etc.
[+] The coming economic recession will also not be enough to reset this trend, and there is no political will to address it.
[++] The government may mandate that these companies have human operators, but it won't work; they'll just maliciously comply. One human operator, a call queue of thousands of people, "our lines are unusually busy at the moment", outsourcing the humans to whichever country is currently poorest and giving them zero power to deal with customer queries, etc. It would be exceptionally difficult to prove they are not providing a good service, and at worst they get fined - which might still work out cheaper than dealing with customer queries.
I agree with the message completely. The facade is new, but it can be very similar to the industrial revolution, and how that upset the order of the then-current times. If we want happier, more well-adjusted people, a better functioning world, we will need a better functioning system, of which AI can be a part as much as any other machine already is. No technology will bring us there however. If it's going to happen, it will happen because people will bring it to life.
Until then, things are just going to be as-is. Sometimes up, sometimes down, overall upwards hopefully, and sometimes advances will upset the status quo. And hopefully no tech will solidify the absurd rich-poor divide permanently.
I don't fully get this meme. AI isn't dangerous because (litany of terrible effects goes here). This is the tech equivalent of "ignore global warming; bad weather is dangerous now". Every problem in scope now was science fiction three years ago. To say confidently that an intelligent computer virus is off the table is already presumptuous, but to offer a litany of other AI dangers as your evidence is just weird.
Strong points, but I disagree with the timeline a little bit.
> A reduction in the labor force for skilled creative work
I agree and disagree with this. Stable diffusion is art, but creating the art is still within the realm of artists. Also, they'll still need copyediting, refining, etc. I think creatives will transfer or complement their skills with this stuff, like some are already doing. (Example: https://m.youtube.com/watch?v=VGa1imApfdg)
I also very highly doubt that fine art will ever be 100% AI. Uniqueness drives their value.
> The complete elimination of humans in customer-support roles
Definitely not. Human customer service is key for achieving high eNPS scores. People will always want to talk to other people, even if IVR and chat can address their needs.
> More convincing spam and phishing content, more scalable scams
Definitely, but it is well documented that the most common types of scams are made to be deliberately "off" to find easy marks more quickly.
> SEO hacking content farms dominating search results
> Book farms (both eBooks and paper) flooding the market
Both of these have been happening for many years. OpenAI will make it easier to stand up boilerplate hello-world starters though (as OP called out). I suppose Google will downrank sites like this to prevent incentivizing this.
> AI-generated content overwhelming social media
> Widespread propaganda and astroturfing, both in politics and advertising
This is the one thing I'm actually concerned about. I hope that Reddit doesn't become people talking to other people via ChatGPT assistants. That would be a cultural net loss.
I agree with the sentiment overall.
There will absolutely be some great benefits provided by LLMs and the like. Alexa type devices that are more useful than a light switch. Auto spelling correction that actually works most of the time. Maybe a microwave oven that just has one "Heat this up" button.
But I think the beneficial use cases will be a fraction of the overall use cases.
These technologies are going to be far more effective at enshittifying our world. Spam, scams, replacing artists and knowledge workers with tools that can produce "good enough" output... and yeah, military capabilities that'll allow humans to kill more humans more cost-effectively.
I'm looking forward to the good stuff, it'll be neat. But absolutely dreading the wave of horribleness that'll inevitably come of this.
> AI companies will continue to generate waste and CO2 emissions at a huge scale as they aggressively scrape all internet content they can find, externalizing costs onto the world's digital infrastructure, and feed their hoard into GPU farms to generate their models. They might keep humans in the loop to help with tagging content, seeking out the cheapest markets with the weakest labor laws to build human sweatshops to feed the AI data monster.
Again, I've said this same thing for months, and yet the AI bros continue to deflect with more nonsense to justify burning the planet with their snake-oil garbage.
Drew's points still stand: the deep learning industry has no efficient methods for training, fine-tuning, or inference, and continues to burn down the planet no matter the amount of greenwashing it projects.
> Contrary to the AI doomer's expectations, the world isn't going to go down in flames any faster thanks to AI. Contemporary advances in machine learning aren't really getting us any closer to AGI, and [...] What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
This type of reasoning is really getting on my nerves lately.
Predicting the future is hard, yeah. But your predictions don't become systematically more accurate just by tacking "boring" and "capitalism" onto them.
A lot of technologies can change our societies in emergent, non-boring ways. Climate change is an emergent effect of fossil fuel usage that you wouldn't predict by just looking at 19th century factories and imagining how they would evolve with "boring capitalism". The internet is extremely non-boring and has had profound effects on our society. Nuclear mutually-assured destruction is an extremely non-boring existential threat.
It could be that the dangers of AI are from the military, or the police, or terrorists, or from corporations seeking to replace labor, or other conventional threats we already have a frame of reference for, yes. Or it could be a completely novel form of disaster, like the equivalent of a school shooter getting AlphaFold 8 to make a novel virus that kills 70% of the population before we even realize there's a pandemic going on. Just because this isn't something we're used to doesn't mean it's fundamentally unlikely to happen.
As for generative AI: "mimicry is always sinister" (Friendship's Death (1987), Peter Wollen).
A one sided argument with sweeping generalities and something about minorities being killed in the process. Got it.
I think this article makes sense when analyzing the current technology of AI. But it doesn't make sense if AI continues to improve. I honestly believe AI will lead to a singularity, in the sense that the future of AI cannot be defined and is currently unknowable.
I think people are just going to spend less time on devices.
All in all it'll probably end up being a net positive, although it's a shame that it had to happen in exactly this way. The dawn of the internet was one of hope and optimism, and the potential value that it held was an ocean compared to the eventual drops for which it was mortgaged.
Search is becoming useless. People are becoming inoculated to social media, and its viral effect will slowly wane. After exposure to all this value-less capitalism, people will eventually wise up, because that's what makes sense, and what will be left for us in terms of value will be the original oldies but goodies that we started with: Wikipedia, YouTube maybe, personal blogs, and commerce.
For many people this has already started. I don't care about going online so much, and I'm much more interested in my community and what's happening around me. My friends and I all use social media, but more as a tool, and it's increasingly becoming more local. When I meet younger people it's even more extreme. They're so cynical about tech that I'm convinced they're going to usher in third spaces and better urban planning and the like when they grow up.
I'm not saying we'll abandon tech, just that we'll only engage when there's a legitimate value proposition. Ultimately that's why there's so much nonsense anyway: it is legitimately hard to create actual value. On the long view, though, only value survives. I didn't even mention "AI", but that will probably just hasten this process from a content perspective. In the future it'll be around and we'll just endure it, but we'll also seek out meaningful interactions whenever we can.
I'm not sure if anyone should take the guy seriously. For a person who has "I don't want to talk about AI" in his profile, and who bans everyone who dares to have a different opinion, he sure talks about AI a lot.
> In case you need to hear it: do not (TW: suicide) seek out OpenAI's services to help with your depression. Finding and setting up an appointment with a therapist can be difficult for a lot of people - it's okay for it to feel hard. Talk to your friends and ask them to help you find the right care for your needs.
In the US, healthcare, including therapists, is quite expensive.
Back in the day I was an indie dev, and that was really hard on me. In my lows I thought I'd see a therapist, but at $500 per half-hour appointment that felt like a gut punch. I didn't have any insurance then.
ChatGPT is instantly available and free. Yes, it's not perfect but it's better than nothing.
For a large part of the US, mental health therapists are not really accessible when people most need them.
> Flame bait
I'll take it.
> ChatGPT is the new techno-atheist's substitute for God
Not really; no ~~true~~ AI-not-kill-everyone-ist says that ChatGPT (or anything GPT-like) is ASI. Please stop beating this particular strawman.
This part seems sketchy: "the long-prophesied singularity. The technology is nowhere near this level, a fact well-known by experts"
Which experts? Yann? Not sure he counts.
Just like in the Rifters trilogy by Peter Watts, the Internet will become unusable thanks to AI-driven spam, phishing, SEO, and other junk content.
> SEO hacking content farms dominating search results
It seems like LLMs are destroying traditional search engines (Google) much faster than they are enabling new ones (Bing + GPT).
Are we going to enter a dark age of search where the signal is drowned out by the noise for a few years? SEO blogspam was bad enough when humans had to write it, now it's becoming impossible to avoid.
This is basically MOLOCH.
We can't stop ourselves from 'crappifying' ourselves.
We are driven by local min/max in society that we can't break free from, until the system breaks.
Moloch https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Past post on 'enshittification' from Cory Doctorow https://news.ycombinator.com/item?id=36611245
>...AI companies will continue to generate waste and CO2 emissions at a huge scale...
Oh come on.
>AI is defined by aggressive capitalism
You could have said that for almost all tech improvements in history - electricity, medicine, radio, cars, trains, plumbing etc. Capitalism as in people selling stuff for money is just how things get done. At least to begin with.
This still feels like it's missing the point in the same way most doomer technologists do, even though it calls them out for it: the world will be changed by AI, just like the internet, and, just like the internet, it will be full of problems, but problems we mostly can't envision or see.
The 'trump card' for all the AI negativity is education. Think of the 590 million Indian kids that live in poverty, for example. If they can get access to a computer and the internet, they will have access to on demand 24/7 first class education. They can even ask questions like they could of a real teacher.
The boon to human productivity and possibility for less suffering in our Capitalist earth can't possibly be outweighed by some boogey-man negatives which will probably never materialize anyway.
Maybe what we need is a hybrid communist/capitalist system. Governments should nationalize public stock markets and all large companies with a market cap above a certain amount.
> What will happen to AI is boring old capitalism.
I see a lot of people want to blame capitalism, but look at any other system, and ultimately they all fail due to human greed. The only way to make capitalism work correctly is with regulation, because once monopolies and collusions are reached, the natural incentives disappear (i.e. the lowest cost service that delivers what the consumer values).
> Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
Agreed. You will earn less money (relative to cost of living) and tax will increase, yet people will still pretend your quality of life has increased - but it hasn't. For many services you now can't reach a human at all. Emails have disappeared, phone lines have disappeared; I now have to waste 5 minutes speaking to a chat bot that I know cannot solve my issue before it maybe allows me to type text to what it claims to be a human.
> LLMs are a pretty good advance over Markov chains, and stable diffusion can generate images which are only somewhat uncanny with sufficient manipulation of the prompt. Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and ML will probably remain useful for writing cover letters for you.
In a sense, most neural networks can be modelled as some form of Markov model. What's becoming more obvious is that the structure of these models is super important, and there is still a lot to be learned.
> Self-driving cars might show up Any Day Now™, which is going to be great for sci-fi enthusiasts and technocrats, but much worse in every respect than, say, building more trains.
Cars are a decentralised transport (as much as a transport system can be), whereas a train is a centralised transport system. The internet is also a transport system, but with packets instead of people, and this has had great success with a mixture of centralised and decentralised transport mechanisms.
The biggest problem with trains is that you create a single point of failure and an unnatural monopoly. Your bandwidth is also heavily reduced due to safety considerations (you want to travel fast over long distances, but need to increase the safety margin to do so). Unlike cars or internet packets, you can't divert a train. One can imagine a new protest group, "just stop energy" (instead of "just stop oil"), quite trivially bringing an entire country to a halt by placing cars on all of the tracks.
> AI companies will continue to generate waste and CO2 emissions at a huge scale as they aggressively scrape all internet content they can find, externalizing costs onto the world's digital infrastructure, and feed their hoard into GPU farms to generate their models.
Interesting to see that none of the climate activists so far have gone for clear winners like crypto mining, or AI training. Instead they would rather keep making the life of the every-day person miserable, as if it isn't miserable enough already.
> You will never trust another product review.
You find that people pay for reviews anyway. Somebody I know gets sent Amazon products to review, and they get to keep the products. The more positive reviews you give, the more you get selected for future reviews. The only way around this is reputation - I find somebody you trust who has reviewed a product. It's why Linus Tech Tips (LTT) and the recent review scandal was important - they have a reputation and it does inform consumers about expensive computing equipment investments.
Socialists really are a dreary miserable lot, aren't they.
I told ChatGPT to replace references to AI with references to computers. The arguments seem just as valid (and wrong). Here is a snippet.
"Of course, computers do present a threat of violence, but as Randall points out, it's not from the computers themselves, but rather from the people that employ them. The US military is testing out computer-controlled drones, which aren't going to be self-aware but will scale up human errors (or human malice) until innocent people are killed. Computer tools are already being used to set bail and parole conditions - it can put you in jail or keep you there. Police are using computers for facial recognition and "predictive policing". Of course, all of these models end up discriminating against minorities, depriving them of liberty and often getting them killed.
Computers are defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of computers are going to make the world worse. The computer revolution is here, and I don't really like it."
The rest of the article.
There is a computer bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by computers. But it will probably be crappier, not better.
Contrary to the doomer's expectations, the world isn't going to go down in flames any faster thanks to computers. Contemporary advances in computing aren't really getting us any closer to AGI (Artificial General Intelligence), and as Randall Munroe pointed out back in 2018:
A panel from the webcomic "xkcd" showing a timeline from now into the distant future, dividing the timeline into the periods between "computers become advanced enough to control unstoppable swarms of robots" and "computers become self-aware and rebel against human control". The period from self-awareness to the indefinite future is labelled "the part lots of people seem to worry about"; Randall is instead worried about the part between these two epochs.
What will happen to computers is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots. Language models are a pretty good advance over Markov chains, and stable diffusion can generate images which are only somewhat uncanny with sufficient manipulation of the prompt. Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and computers will probably remain useful for writing cover letters for you. Self-driving cars might show up Any Day Now™, which is going to be great for sci-fi enthusiasts and technocrats, but much worse in every respect than, say, building more trains.
The biggest lasting changes from computers will be more like the following:
- A reduction in the labor force for skilled creative work
- The complete elimination of humans in customer-support roles
- More convincing spam and phishing content, more scalable scams
- SEO hacking content farms dominating search results
- Book farms (both eBooks and paper) flooding the market
- Computer-generated content overwhelming social media
- Widespread propaganda and astroturfing, both in politics and advertising
Computer companies will continue to generate waste and CO2 emissions at a huge scale as they aggressively scrape all internet content they can find, externalizing costs onto the world's digital infrastructure, and feed their hoard into GPU farms to generate their models. They might keep humans in the loop to help with tagging content, seeking out the cheapest markets with the weakest labor laws to build human sweatshops to feed the data monster.
You will never trust another product review. You will never speak to a human being at your ISP again. Vapid, pithy media will fill the digital world around you. Technology built for engagement farms (those computer-edited videos with the grating machine voice you've seen on your feeds lately) will be white-labeled and used to push products and ideologies at a massive scale with minimum cost, from social media accounts which are populated with computer content, cultivate an audience, and are sold in bulk and in good standing with the Algorithm.
All of these things are already happening and will continue to get worse. The future of media is a soulless, vapid regurgitation of all media that came before the computer epoch, and the fate of all new creative media is to be subsumed into the roiling pile of math.
This will be incredibly profitable for the computer barons, and to secure their investment they are deploying an immense, expensive, world-wide propaganda campaign. To the public, the present-day and potential future capabilities of the technology are played up in breathless promises of ridiculous possibility. In closed-room meetings, much more realistic promises are made of cutting payroll budgets in half.
The propaganda also leans into the mystical sci-fi computer canon, the threat of smart computers with world-ending power, the forbidden allure of a new Manhattan project and all of its consequences, the long-prophesied singularity. The technology is nowhere near this level, a fact well-known by experts and the barons themselves, but the illusion is maintained in the interests of lobbying lawmakers to help the barons erect a moat around their new industry.
Of course, computers do present a threat of violence, but as Randall points out, it's not from the computers themselves, but rather from the people that employ them. The US military is testing out computer-controlled drones, which aren't going to be self-aware but will scale up human errors (or human malice) until innocent people are killed. Computer tools are already being used to set bail and parole conditions - it can put you in jail or keep you there. Police are using computers for facial recognition and "predictive policing". Of course, all of these models end up discriminating against minorities, depriving them of liberty and often getting them killed.
Computers are defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of computers are going to make the world worse. The computer revolution is here, and I don't really like it.
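For what it's worth, a crude version of that substitution doesn't even need ChatGPT; a rough sketch (the word list is made up for illustration), which also shows why mechanical replacement yields slightly broken grammar that an LLM smooths over:

```python
import re

# Naive word-level substitutions; the resulting grammar is often
# slightly off ("an computers bubble"), which is exactly the kind of
# smoothing ChatGPT handles and this sketch does not.
SUBS = [
    (r"\bAI\b", "computers"),
    (r"\bmachine learning\b", "computing"),
    (r"\bLLMs?\b", "language models"),
]

def computerify(text):
    for pattern, replacement in SUBS:
        text = re.sub(pattern, replacement, text)
    return text

print(computerify("There is an AI bubble, but AI is here to stay."))
# prints: There is an computers bubble, but computers is here to stay.
```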
Drew DeVault confirmed Urbanist Bro
Completely unserious comment but I enjoy the Alan Fisher reference, it's funny seeing my online spheres intersect.
EDIT: serious comment.
The whole post is a bit of a doomer one. The thing is, in the maximalist bad world that Drew DeVault poses, human interaction (customer support, human-written articles and opinions) becomes a premium, meaning the pendulum will swing as people realise the mistake. A lot of people will be hurt or even die in the medium term, which is true, but the world he posits seems one that leads to an unstable maximum.
As with much of this author's content, this is a strong opinion that lacks nuance, but I basically agree with the fundamental assertion: that the lasting impact of this AI bubble will be to further centralise power, taking it away from workers.
My hope is that a desire for authenticity prevents this from happening - whether that's a strong bias towards human content creators, towards speaking to a human on the phone for customer support (already something companies try to win customers on), or even winning customers on well-paid humans cooking their food for them (something that seems to be increasing).
Unfortunately, I suspect we will get a two-tiered system, where the "middle class" (whether that's disappearing is another question) can afford human content/human support/etc, and the working class are forced to endure poor experiences with AI generated content and so on. This may even get worse over time if, say, AI hits education and provides a worse quality education, but that's probably no different to what we already have with public school funding issues in the US/UK and many other countries.