Reddit in general is getting wild. Entire comment sections on clips have half the people 100% convinced it's AI and that everyone who can't see it is stupid.
And the other half of the comments are the precise opposite: equally convinced that it's real and that the other half are morons.
And if you can't get consensus on what's even real, then who posted it (bot or human) and their potential motivations for doing so become murky too. Everything just sorta collapses. Can't tell if the content is real. Can't tell if the poster is real. Can't tell if the comment authors are real. Might not be all bots, but it may as well be.
As someone who has been mistaken for a bot a few times, I can relate. One of my creations was even reported as being made by GenAI. Not sure whether to take that as an insult or not.
That was quite amusing and really fascinating to read; kudos, my friend!
The smartest bots and shills write more convincingly than average people who are in the lucky 10000 (https://xkcd.com/1053/), and reactionary phrases work in part because people who are exposed to them spread them (like idioms and other memes).
Some posts are very likely authentic, because they have a distinct writing style and/or incorporate things that yesterday's LLMs have trouble with (like relevant screenshots); and some posts are clearly inauthentic, e.g. your prior example that had an affiliate link. But in between, there are many posts which could have come from either a bot or a human making "small talk" online. And unless you know the person who wrote something, you can never be completely sure: a talented state actor could write a genuine piece of insight (e.g. to build credibility), or tomorrow's LLM (perhaps unreleased but being tested in the wild) may beat today's commonly-held tells for detecting AI.
I never accuse people of being a bot or shill, because I think it's irrelevant. Does the writing provoke thinking or generate insight? Are its claims cited, or backed up by reasonable-sounding logic from claims I already believe? I'm not "separating the text from the writer"; I'm deciding that if the writer's text is interesting and convincing, it doesn't matter whether they're a bot or shill, because the intrinsic problem with bots and shills is that their posts are bland, reactionary, and misleading, so the solution is to avoid bland, reactionary, and misleading content. Plus, I already doubt anything I read on the internet; even if the poster is authentic, they could be wrong (for facts) or have different values and preferences (for opinions, e.g. perspectives or product recommendations).
I do feel the need to mention: I agree the internet is "faker" today, with more bland and reactionary content, and the main factors are probably LLMs and improved marketing/propaganda. There are still niche places, but they usually don't interest me (probably why they stay niche). I think the solution is "better curation", but how? Ironically, perhaps it will be AI figuring out what is "interesting" to specific people; a thing that enables (or at least clears an obstacle to) infinite "slop" generation, will be the thing that protects us from uninteresting and upsetting media.
Please don't do a Substack. That website is full of Nazis. Your blog is fine.
Ah, the old "make a bot post, then write about the bot post, then get called out as a bot, then make another post talking about getting 'falsely' accused of being a bot" play.
It's all so obvious!
/s
Big recent thread: https://news.ycombinator.com/item?id=43672139