Counterpoint: I think they usually look dumb, but seeing them does not prejudice me for or against the text of your article.
To clarify, I do sometimes get mad at actual photos that are poorly chosen, and the same is true of AI-generated photos demonstrating bad art direction, bad taste, or bad judgment on the part of the author or editor. But the fact that they're obviously AI-generated has nothing to do with it.
> I know there’s plenty of things you can roast my blog for but at least you know for a fact you’re getting the thoughts of a real human being and not some LLM.
In fact, I cannot know that.
I stopped using AI images as social media thumbnails because a) they take a surprising amount of effort to make distinct and non-generic, using techniques such as ControlNet [e.g. https://minimaxir.com/2023/03/new-chatgpt-overlord/featured....], and b) thumbnails don't matter anymore for personal blogs, since social media sharing of links is dead.
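For anyone curious what that ControlNet workflow looks like, here's a minimal sketch using Hugging Face's diffusers library. The checkpoints, the Canny edge-conditioning, and the file names are illustrative assumptions on my part, not the exact pipeline behind that thumbnail: the idea is to condition generation on an edge map of a reference layout, so the output keeps a distinct composition instead of defaulting to generic AI mush.

```python
# Minimal ControlNet sketch with diffusers (model names and paths are assumed).
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Turn a reference image into a Canny edge map; generation will follow
# these edges, which is what keeps the composition non-generic.
reference = np.array(Image.open("layout_reference.png").convert("RGB"))
edges = cv2.Canny(reference, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# A Canny-conditioned ControlNet paired with a Stable Diffusion base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA GPU is available

thumbnail = pipe(
    "robot overlord at a desk, flat vector illustration",
    image=control_image,
    num_inference_steps=30,
).images[0]
thumbnail.save("thumbnail.png")
```

Which is exactly the effort problem: you have to supply the structure yourself, and at that point a thumbnail stops being a two-second job.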
My problem with the AI-generated images most people use for their blogs isn't that they're AI-generated, it's that they're bad. People mostly use DALL-E 3, I think, to make these super-busy infographic-type illustrations that are completely devoid of meaning because they're AI: full of mangled text and nonsense logos. They're incomprehensible; they maybe kind of convey the idea of "business" or "tech" or something like that, but they make no real point.
If you're going to use AI to make a blog image, use it to make something identifiable. If your blog post is about tigers and you use a picture of a tiger, it doesn't make much of a difference whether the picture is AI-generated or a stock photo. You didn't take it either way. If the blog post is about your prediction for the next year of tech stocks, use the AI to make a picture of something simple, like a computer, rather than some kind of Bayeux Tapestry of random tech-like things.
I think there's been a vibe shift. I was obsessed with DALL-E when it first came out. Now I think these images look tacky.
Another thing that makes me close the page as fast as I can hit Ctrl-W is anime avatars, or a blog written as a back-and-forth discussion between two cartoon characters.
I agree - at least I take it as a red flag that the blog's content is probably also low-effort slop, and that I should assign it rather low credibility.
My take is that unless they're images hand-drawn with homemade pencils and paper, it's not even really art.
Most AI-generated images I've come across (I have in mind Substacks, Medium posts, and other personal blogs) are purely "decorative," serving the same (lack of) purpose as stock photography - e.g. a generic photo of people exercising in an article about fitness. In informational articles I find these pointless, and maybe somewhat in poor taste. But then I'd probably feel the same way about regular stock photography.
There may be potential for AI-generated explanatory visuals, though. High-quality diagrams, graphs, maps of complex conceptual relationships, and so on would be exciting.
I've come to feel the same. It feels very low-effort to use AI-generated content when communicating with an audience.
I don't think using AI-generated images is bad. I also don't think using an LLM to help write an article is necessarily bad, either. They're both just tools.
You're equating the use of these tools with low effort. And that's often true. But not always.
In our own blog/newsletter (about data and AI engineering), we put a ton of effort into the development and research work that goes into posts. Often a post summarizes an open-source project we've released; sometimes it summarizes research we've done.
But we're not skilled artists. Nor are we skilled copywriters. And many of us don't speak English as a first language, either. So we use AI to generate images, and we use LLMs to help sharpen the writing on the blog post.
I don't think that makes our work low effort.
To me as the viewer, AI images sometimes pass for real ones, because at a glance they successfully convey a concept. A slight sense of something being off prompts a closer look, and it quickly turns into something akin to a "Spot the difference" game: spotting one error leads to a chain reaction of finding more. Whatever art 'is', from a philosophical standpoint, I must assume it is more than just a medium to convey a concept. At the realization of it being generated, it sort of turns into 'un-art'. Whatever concept is conveyed by an AI image also inherently conveys the concept of "generated".
I wouldn't mind the AI images if they didn't all have that exact same look you can spot a mile away.
Looking at you here, patio11.
I have the same feeling about YouTube video thumbnails.
Yeah they discourage me as well.
But what I say doesn't matter. What this person says doesn't matter. What A/B tests say matters.
eh, do what you want imo
Earlier this year I was visiting friends, and one of them was watching some terrible YouTube videos full of ancient-aliens-type nonsense and AI stock footage. Half of it didn't make any sense, and it was just a bizarre experience: mixed-up nonsense blather illustrated with nonsense imagery.
The friend who was watching this stuff is not the brightest, and he's extremely gullible. I'd rather he just watch Spider-Man or something; he'd probably be better informed about the world.