Last night, while working on my game for the Playdate, I was trying to draw a quick-and-easy version of the classic “pointing finger” from vintage signs. One of the most time-tested and recognizable images there is, so this would be as simple as it gets: choose one out of what must be dozens of reference images on Google, and whip out a smaller, one-bit version in Pixaki.
What the image search served up instead was page after page of AI-generated versions of that classic image. You could find them in every possible hand pose, pointing in every possible direction, in a variety of styles ranging from fake-woodcut to fake-engraving, in a spectrum stretching from big and cartoony to smaller and more realistic.
I could have (should have?) stopped there and just gone with one of those, and it would’ve made absolutely no difference. I just needed a reference for a simple drawing that I was going to do myself, not assets to drop directly into a game. And I should emphasize that the final product would be under 60 pixels square. But it still didn’t sit right with me.
I wanted to find a photo of an actual sign. Even if it was a photo of a sign from a theme park, or a Wendy’s (are people old enough to remember when all of Wendy’s branding was turn-of-the-century inspired, and there were recreations of old newspapers printed on the tables?), which was itself several generations removed from the originals, those were at least made before generative AI was introduced into the process. (Not to mention that at this point, those versions themselves are “vintage.”)
But even then, all the photos that turned up were from places on Etsy or eBay, selling faux-vintage signs that were themselves printed with AI-generated slop. I was shocked by how difficult it was to find a genuine photo of even an imitation of such a commonly-used image.
So I ended up falling back to my absolute last resort: just taking a picture of my hand and drawing that. People keep saying that anyone who hates generative AI is just rejecting the inevitable march of progress and refusing to take advantage of modern technology, and at least in my case, they must be right. I took a digital photo using my handheld computer camera, which was instantly copied over the cloud to my tablet computer, where I used a wireless, pressure-sensitive stylus to trace over it, and then sent it to the laptop computer on my desk. Just like my ancestors would’ve done.
The only thing that keeps this from being another case of “cool story, bro” (and it still could be, I dunno; this blog’s free, stop complaining) is that it made me consider the entire farm-to-table pipeline of generative AI, and how it’s such a gross failure at literally every single step of the process. It’s easy to concentrate on one specific way that it sucks, because the failures are so immediately apparent that pro-AI supporters (aka “people who are financially invested in it”) have incorporated the failures into their propaganda.
Nightmarish blobs and misshapen hands? Ha, isn’t that funny and charming — look, these smart systems are actually learning! It’s getting harder and harder for nerd Luddites to go online with suspect images, circling the offending parts in red. At the rate these things are improving, they’ll inevitably be flawless!
These systems are a huge drain on polluting energy sources and water supplies? Never bet against the inevitable march of technology, friend! As processors get more efficient, and models get more advanced, we’ll soon see all of this investment pay off! (I actually saw a chucklehead on Bluesky insisting, repeatedly and unironically, that environmental concerns were “overblown” since the resources used by data centers were still less than the global transportation infrastructure of cars and planes.)
The models are trained on work stolen from real artists, writers, and actors? Oops, our bad! Ignore that, since it’s all water under the bridge (and diverted to data centers for cooling), and we’re all about consent these days. Plus there are plenty of companies touting models that are trained “ethically” and run locally without being connected to the internet, and those claims are every bit as rigidly defined and trustworthy as, say, seeing stuff in a grocery store that’s labeled “organic.”
The jobs of artists, writers, animators, actors, and filmmakers that are being eliminated to be replaced with generative AI? It’s all about efficiency, chum, letting people accomplish their jobs with less effort! (Just ask any of the tens of thousands of people who’ve been laid off to compare the amount of work they had to do today with the work they had to do even a few months ago! Ha ha, I can joke because I’m in management and we’re under no threat from this stuff).
It’s become so omnipresent so quickly, being shoved into more and more applications where it doesn’t belong, that it’s gotten all but impossible to avoid it? Ha, I know, right? Crazy days in 2025, I’m right there with ya, but since it’s so clearly inevitable, we might as well learn to live with it!
And of course, there have always been the more esoteric concerns about the ethics and morality of “thinking machines,” which have been floating around for as long as we’ve had even the concept of artificial general intelligence. What amazes me is how thoroughly and shamelessly AI proponents (aka investors) have incorporated skepticism and concerns into their propaganda. Have we learned nothing from Star Trek: The Next Generation? If these unfathomably advanced systems can actually think and feel, is it even moral for anyone to propose just… turning them off? We don’t even fully understand how they work! (I’d actually forgotten how OpenAI tried to use the “Jane! Stop this crazy thing!” defense against allegations of stolen training data, insisting that there was so much data being used to train its models, and the black box of complex transformers was so inscrutable, that it was literally impossible to trace anything back to an individual source. And apparently didn’t get enough push-back from people saying, “But that’s worse. You understand how that’s worse, right?”)
There’s a filmmaker named Sergio Cilli who’s been making videos cutting through the hype of generative AI, not by making impassioned protests about the ethics of training data, or the environmental costs, or marking offending portions of gen AI output with red circles, but by getting back to the most pragmatic basics: this stuff’s just plain not good. Many of them show a director trying and failing to get AI-generated video to deliver anything usable for even the most basic, unimaginative, and prosaic tasks, the only kinds of things that AI slop would theoretically be good at. Brief commercial spots, scenes from sitcoms, cliched horror movie moments. It fails to pass even the lowest of low bars.
His latest at the time I’m writing this is responding to a video from someone in Germany, which is a shamelessly manipulative bit of propaganda intended to look like a PSA interviewing children asking them how it feels to be AI-generated. (Cilli’s best line is “whoa, don’t show the Kool-Aid, just take a few sips!”)
The propaganda video is a perfect example of how deeply cynical these things are in their manipulation. It announces from the start that everything is AI-generated, but it still uses all of the stereotypical signifiers of “authenticity,” by generating shots that are made to look like they’re a peek behind the scenes. (Which are themselves gross perversions of reality, but I could spend hours ranting about that). It’s yet another example of the age-old trick of grifters and con men, trying to make you suspicious of one thing as a distraction from what you really should be suspicious of.
I don’t doubt that some viewers will be affected by the maudlin emotional manipulation, and some will be swayed by the idea that they’ve just seen a dire warning of the chilling implications of generative AI. But it’s cynically designed to work on a viewer coming at it with almost any level of skepticism. It’s not trying to convince you of anything so much as to erode your trust and confidence, whether through the guileless “maybe there is more going on in these systems than we yet understand!” or the defeatist “if they can make something that looks this real, does it really make a difference whether it’s actually real or not?”
When you can generate dozens or hundreds of these things at low cost, you don’t even have to come up with the one perfect video that goes viral and most convincingly makes its point. You just need to have them suggest an idea and then flood every social media platform with them so that the idea sticks with enough people.
And as always, the main idea that they’re trying to convince you of isn’t that generative AI is good. It’s that it’s convincing, that it’s increasingly difficult to distinguish from reality, that it’s so huge and sophisticated that laypeople can’t possibly even understand how it works, and that it’s inescapable and inevitable. You don’t even have to think that any of those things are good; you only have to believe that they’re true.
I think the most damning of Cilli’s videos is one where he doesn’t try to be funny. He just shows the output of one of these systems. He tries, out of curiosity, to get the most-hyped AI “actress” to recreate a genuinely classic scene, from Diane Keaton’s performance in Annie Hall. And the results aren’t just bad, but too bad to even be funny. It’s just repulsive.
The reason it’s not funny is because there’s no specific thing to make fun of, nothing that can be fixed to improve it. It simply has no reason to exist.
And that’s why pointing out “tells” or errors in something that’s AI-generated is kind of missing the point. (Even if I agree entirely with the overall sentiment, and am happy to reject anything if AI was used anywhere in the production of it.) If I were a more poetic person, I could probably come up with some kind of poignant analogy of how, at their most fundamental level, these systems are designed to generate errors and eventually incorporate them into a “correct” result. That’s similar to how the generative AI hype bubble works: the more you criticize any particular aspect of it, the more it gets fed into perpetuating the hype bubble.
It ignores the most basic question, which is why does this exist in the first place? A while ago, an artist at a comic book convention was suspected of using AI in his “original” art, and there were several people posting pictures of it with the offending parts circled. I looked at it and thought, “Yeah, I think I see the problem now. The whole thing sucks.” It was just generic, uninspired, derivative, non-art. The idea of it having been created with AI was one of the least offensive things about it, because nobody honestly expects a computer to have taste.
The technology has progressed to the point where I have a hard time reliably identifying when something has been made using generative AI. Honestly, a lot of the time, it just has the same look of rushed sloppiness of something I would make: outright forgetting to include a detail, failing to get perspective or proportions right, finding it hard to draw a certain thing and just replacing it with a random blob.
It’s gotten to the point where “false positives” are getting more frequent — social media will go on a campaign calling out something as having been clearly AI generated, and the artist responds with proof that it wasn’t. I don’t think this is the “gotcha” that some people seem to think it is, though. I know if I’d drawn something that was either sloppy or blandly generic enough for someone to assume that it was made with AI, I’d be prompted (ha!) to seriously reconsider what I was doing and why I was doing it.
Maybe instead of seeing it as an environment in which we can no longer trust that what we’re seeing is “real” art, which is depressing, we should see it as a case of people being more critical and more aware of what we value in art? Recently at a Halloween event, my husband and I were drawn to T-shirts that were for sale, and each of us became suspicious that they’d been generated by AI. (As it turned out: his wasn’t, but I’m still pretty sure mine was.) We ended up not getting them, because we weren’t sure. In retrospect, I’m actually grateful, since it was a reminder of how often my personal taste trends towards Basic.
Why did I find it unnerving that I couldn’t tell whether this thing was real or not, instead of finding it unnerving that I was actually tempted to buy something that was so blandly inoffensive that I couldn’t tell whether it was real in the first place?
For a while, it’s been a bummer that a particular style of art and character design that I like a lot has become common in exploitative free-to-play games. So common that it’s kind of ruined the style for me. It’s similar to how the specific style of cartoon person that pops up over and over again in generative AI happens to be exactly the style that I aspired to be able to draw someday. Depressingly often, I’d see a cartoon that had been shat out by some AI model and think, “that’s exactly the style of character I had in mind that I’d eventually get adept at drawing, if I only practiced enough.”
I hope it’s obvious to you — because it wasn’t obvious to me until just recently — but that doesn’t say as much about the depressing ubiquity of generative AI, as it does about my own taste and what I aspired to make. Those designs are appealing to me because being instinctually appealing is the entire reason they exist. They are all but literally the average of all art that people have liked enough to share (and tag, for the purpose of training models). It’s a process not unlike turning the wild, imaginative, and often weird concept art for the Toy Story movies into the globally-appealing characters that ended up in the final product. Just with a source data set that’s orders of magnitude larger, and iterated over near-infinitely more generations. I like it because it has been manufactured specifically to remove any stray detail that might make me not like it.
It would be like declaring you have a crush on the average Apple executive that Ashur Cabrera generates every so often.
Which goes back, finally, to my gripping story of the pointing finger art. The main reason I could find nothing but gen-AI in an image search is that the thing I was looking for was so common and predictable that it’s been used thousands upon thousands of times before. Most often to give off exactly the same type of “vintage” vibe that I’m using it for. It was the first thing to come to mind because it’s the biggest cliche.
I don’t even think that that’s bad, in this case, because it’s purely functional. Sometimes the easiest answer is the best answer. But the whole process of immediately going online to try and find a source image, instead of going to my own hand right there at the end of my arm, seems more significant. Like the universe slapping you on the back of the head to make you notice something.
It’s basically the whole gestalt of the generative AI that I dislike so much. Wanting something immediately. Choosing something predictable because it’s fine, and it doesn’t really need to be anything special. Being satisfied to recreate something that I’d seen before, hundreds of times. And trying to find a facsimile (of a facsimile of a facsimile) of the thing, instead of just looking at the thing itself. They’re all things that require ignoring the fundamental question of “why does this need to exist in the first place?”
I’ve seen very eloquent statements about the satisfaction of creating something truly original, the way that the process of creation is an inextricable part of the final product, the way that we learn so much during the process that we would never have learned if we’d just leaped from the idea to the final product. And they’re all true, but they’re also all preaching to the choir. People who can fully appreciate that and feel inspired by it are not, by definition, the target audience for generative AI.
The audience is more people like me. Or at least, a younger version of me. Honestly, if this stuff had been around twenty years ago, I would’ve been a lot more open to it. (In fact, I was fascinated by AI “art” filters on Instagram for an embarrassingly long period.) I’d have been open to the possibility that the concerns are very real, but we’ll find ways to address them, and that after all the gold rush hype, we’d be left with a limited set of applications where using generative AI makes sense.
Because I tend to think I’m better at ideas than I am at executing on them, and I’m better at words than all the various kinds of art that bring words to life. I aspire to being a good artist, but lack the time (or patience) to dedicate to being satisfyingly great at it. I’m a programmer who’s often satisfied to use code that’s been proven to work, even if I haven’t taken the time to figure out for myself exactly how it works. And I have zero aptitude for voice acting or music creation, and have rarely been operating with enough budget to afford people who are good at it. In other words: I’m the perfect mark for a generative AI grift.
Had this stuff been readily available twenty years ago, I can all but guarantee that I would’ve made a lot of really uninspired garbage with it. Because the easier it is to make something, the less likely it is that it was worth making in the first place. I have had a lot of ideas over the years that I was dead-set convinced would be solid gold if I could just get them made; it was only a lack of time and resources that stood in the way. And I have completely forgotten 99% of these potentially world-changing flashes of genius. (On second thought, maybe they actually would’ve been world-changing? And the lack of them is why everything sucks now?)
I’d been surprised by how quickly the push for generative AI seemed to skip past evangelism and go directly to bullying. You don’t want to get laid off like all these tens of thousands of other people? You’d better jump on board the train or you’ll be left behind! Don’t like it? Tough, we’re going to put it everywhere, whether you like it or not.
With every “advance” of the technology, it just becomes more and more apparent that there’s nothing at the core of it. It’s just not good. Everything that’s genuinely amazing about it — and anyone with an interest in technology has to be honest and admit that it is genuinely amazing to be able to generate even seconds of realistic-looking video from a text prompt — is all on the surface level. Once you try to build anything on top of it, it either falls apart completely (which makes the news and gets lots of attention on social media) or more often, it becomes immediately uninteresting.
Even if there were a narrow set of applications where it was useful — which seems increasingly unlikely — those are not the kind of applications that make for impressive tech demos that will have people investing billions of dollars.
So I’ve developed a pretty pessimistic and dismissive attitude about generative AI over the past few years, but that was without even asking the most damning questions of all: why does this exist? And even if it could do everything it claims it can, what does it say about me that I would even want that?
