A Concerning Trend

  • Yeah, this is the downside of AI: while it might make some kinds of work more efficient, it makes other kinds of work less efficient by enabling the construction of lots of "fake" inputs which have to be discarded.

    It's probably going to further the destruction of text-based social media and force a retreat of people back into their social, political, and geographical circles. You will have to stay in the "bubble" because almost everything you encounter outside the bubble will be fraudulent AI content.

  • Error on site (hopefully temporary): Error establishing a database connection

    Archive link: https://web.archive.org/web/20230216110220/neil-clarke.com/a...

    The intro:

    > Since the early days of the pandemic, I’ve observed an increase in the number of spammy submissions to Clarkesworld. What I mean by that is that there’s an honest interest in being published, but not in having to do the actual work.

    More details (also calling this "AI spam"):

    > What I can say is that the number of spam submissions resulting in bans has hit 38% this month. While rejecting and banning these submissions has been simple, it’s growing at a rate that will necessitate changes. To make matters worse, the technology is only going to get better, so detection will become more challenging.

    Link to the mentioned magazine, which is for science fiction & fantasy: https://clarkesworldmagazine.com/

    They go into detail about accepting submissions:

    > Clarkesworld Magazine is a Hugo, World Fantasy, and British Fantasy Award-winning science fiction and fantasy magazine that publishes short stories, interviews, articles and audio fiction. Issues are published monthly and available on our website, for purchase in ebook format, and via electronic subscription. All original fiction is also published in our trade paperback series from Wyrm Publishing. We are currently open for art, non-fiction and short story submissions.

    (just summarizing a bit, since the title didn't make sense alone, then the site crashed and I wasn't that familiar with Clarkesworld in particular)

  • I see AI ending up under strict regulation so that all AI-generated content has to be stored somewhere read-only. Other highly regulated companies and law enforcement agencies will then have access to this AI web of content and will be able to search it to see if any submitted content, be that an essay or a forum post or whatever, is AI-generated and to what extent. The public will still be able to access AI tools via the regulated companies, but the scope of damage the individual can inflict will be limited. People caught using AIs outside of the state-sanctioned companies will get harsh jail sentences to deter anyone from trying the same.

    I don’t think there’s any other way to avoid chaos. I doubt anyone is going to be able to come up with a reliable algorithm to detect AI content. We’re literally going to have to just store everything these AIs produce in plain text and search it when needed.
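    The lookup the comment imagines could work roughly like this: every generated text is fingerprinted into an append-only store, and suspect submissions are checked against it. A minimal sketch (the class and function names, and the choice of SHA-256 over normalized text, are my own illustration, not anything from the source):

    ```python
    import hashlib

    def normalize(text: str) -> str:
        # Collapse whitespace and lowercase so trivial edits don't evade an exact match.
        return " ".join(text.lower().split())

    def fingerprint(text: str) -> str:
        # SHA-256 of the normalized text serves as a compact lookup key.
        return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()

    class GeneratedContentRegistry:
        """Append-only store of AI-generated text, searchable by exact match."""

        def __init__(self) -> None:
            self._store: dict[str, str] = {}

        def record(self, text: str) -> None:
            # Called at generation time: every AI output gets logged.
            self._store[fingerprint(text)] = text

        def was_generated(self, text: str) -> bool:
            # Called at submission time: check whether this text was ever generated.
            return fingerprint(text) in self._store
    ```

    Note the obvious limitation, which cuts the same way as the comment's own doubt about detection algorithms: exact-match lookup only catches verbatim (or trivially reformatted) copies, so any paraphrase of the generated text slips through.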

  • We’re really going to need a robust way to verify human actors on the internet:

    Authentically Human in a World of ChatGPT

    https://www.williamcotton.com/articles/authentically-human-i...

  • As software engineers, we need to take responsibility for the impact our software has on the world.

    I hope Sam Altman and everyone else who contributed code to ChatGPT is taking a long, hard look in the mirror and considering what it means to make the world worse with our labors.

  • Pretty soon the content will be better than what we can produce anyway. You're fiddling while Rome burns. You need to pivot hard because your business model is dead, as are the jobs of almost all white-collar workers. We have AI and it's not like we thought: AI is people in a box. A person in a box is much cheaper than a meat sack in the real world, and it will only get cheaper.