I’ve already gone off on tirades about generative AI, whether it’s in regard to finding reference art, or how it so obviously reads like the work of grifters and con men, or how it’s used to make execrable propaganda videos. And it always feels like I’m just stating the obvious, circling around an idea without ever really getting to the heart of it.
And I’ve come to this conclusion: it is so obvious, and that’s why it’s so confounding. There are so many serious concerns already, and new ones seem to keep flaring up every week. (I started writing this post in response to one big flare-up in the Discourse, and as I’m writing this now, we’re already in the middle of an entirely new one.) And these aren’t vague or subtle concerns that require altruism or long-term foresight. It’s not like expecting the average American in the 60s or 70s to grasp the dangers of microplastics, or, hell, expecting me around 2006 to appreciate just how profoundly toxic Facebook and Twitter would turn out to be. These are blatantly obvious bad things that are doing real harm to real people right now.
I’ve kept thinking that if you dig down deep enough, there’s some unifying thing (spoiler: there is) that justifies all of The Discourse (spoiler: it doesn’t). Because the one thing I know to be absolutely true is that the advancements in machine learning are a breakthrough, able to parse and process data in ways that feel genuinely magical. But every time I’ve tried to make the leap from that to a justification for generating an end result, to the limited and narrow domains where it would still make sense once the hype bubble has burst, I’ve come up short.
It feels like everything is built on the basic assumption that of course these systems have a reason to exist, without anyone bothering to convincingly justify that existence.
I guess if nothing else, it’s helped me better appreciate Lovecraft-themed horror games. I could never quite wrap my head around the idea that simply seeing something that shouldn’t exist could drive you insane. I think I get it now, because it’s maddening to see so many people flat-out denying the obvious. I’m extremely aware that “gaslighting” is a term so overused in online discussions that it’s almost meaningless, but: we can all see that that’s exactly what’s going on here, right? Right?
Just one example: the way that Google’s Gemini has now been put at the top of every search result page, even though it is, to put it tactfully, utter dogshit.
I keep seeing a screenshot going around of someone typing in the prompt “is marlon brando in heat,” to which Google’s AI Overview responds that no, he’s not, because Marlon Brando was a legendary Hollywood actor, and being “in heat” refers to the reproductive cycle of animals, and therefore does not apply to humans. (Apparently Gemini never saw Last Tango in Paris, am I right?! Though I don’t know if that joke actually makes sense; I never saw the movie and only know it from its reputation.)
Now, I still don’t know if that’s a genuine screenshot, or if it was faked to make a point. I don’t even know if that point was “let’s point and laugh at how shitty generative AI is!” or the more insidious “isn’t it amazing that generative AI gets tripped up on the same linguistic ambiguities that humans do? These systems are just like us!!!” Not being able to trust anything is one of those many concerns that’s a whole topic in itself.
But I can at least post a screenshot of the results when I typed the same query into Google:
[Screenshot: Google’s AI Overview response to the same “is marlon brando in heat” query]
I guess at least it gets the context of “in Heat” right. But it’s still just obviously, unacceptably bad on every conceivable level. Maybe the least offensive thing about it is that they’ve invested billions of dollars into a technology that makes it so that computers can’t do basic arithmetic.
The factual errors mean that none of the information is reliable without checking its sources, and it now takes more steps (intentionally) to get to those sources. It’s unnecessarily wordy, wasting your time, which defeats the entire purpose of a summary. And like all AI-generated text, it reads like someone bullshitting you to pad out a word count.
When I was first writing this post, I’d started to go off on a whole rant about how this was so much worse than the stupid “reproductive cycle of animals” version, because at least that one was funny. This is just wrong and bad in a dozen different uninteresting ways. Which is insulting, considering that it’s coming from the flagship product of one of the richest companies on the planet.
That’s what made it finally click for me: the realization that if I worked for Google, I’d be embarrassed by this.
It made me remember the year I wasted applying to various jobs within Google. Even though I’m bringing it up several years later, I’m not really bitter about it (no, seriously, I’m not bitter; please don’t write on the blog that I’m bitter), because even at the time I was old enough to know that working at a place that’s a bad culture fit is just a miserable experience. To be fair to Google, only one of the interviewers treated me with open contempt, and only most of the recruiters and hiring managers outright ghosted me. And several of the people I spoke to were perfectly nice!
But even then, there was this unsettling feeling that the power dynamic was completely off, even by the already skewed standards of a job interview. I got the unmistakable vibe that the people I spoke to were interested in my experience and my ideas in the same way that you make small talk with a child, asking them about their favorite Pokémon.
I realized that if, during the interview process, someone had asked me “Is Marlon Brando in Heat?” and I’d answered with the AI Overview response above, I’d have been summarily escorted off the campus. And yet here it was, the result of a multi-billion-dollar investment into defacing their core product. The difference is that they can get away with it.
And I mean, I get it: this sounds like the kind of observation made by the most tedious person you know on social media, who always tries to frame every single issue in tech as an example of class warfare. I still use Instagram (albeit increasingly begrudgingly) even after decades of hearing people screaming “you’re the product!!!” because it’s exhausting to overthink everything, instead of occasionally just shrugging and doing what’s most convenient.
But it’s remarkable to me how every single concern about generative AI that I can think of ultimately breaks down into a question of keeping as much money and power as possible in the hands of the “right” people. That’s the unifying thing, the aspect that has adult human beings putting forward the most preposterous ideas, and defending things that we can all immediately tell are indefensible.
And it’s useful to look at all of it from that perspective, since the investors and evangelists (who, never forget, are almost always investors) have increasingly taken a “bad cop/good cop” approach to pushing this dogshit onto us. When it’s not an outright case of “you’ll use this stuff because we say you’ll use it, so just get on board and stop your whining,” it’s claims that this is all some grand, democratizing technology that will make everybody’s life better, so what’s everyone so upset about?
Billionaire Mark Cuban has been particularly vocal on this front, saying essentially that the histrionics of terminally online people amount to protesting progress, complaining about something that will obviously make creative and technical people’s jobs easier.
At the time I’m writing this, the CEO of a previously beloved game studio is burning through its accumulated goodwill by trying to defend its use of generative AI in the pre-production and concept stages, insisting that it benefits the actual creative human beings making the game and doesn’t end up in the final product, so what’s the problem?
You can respond that the concept stage is where the most satisfying and interesting work happens, since you’re not just deciding what works, but learning what doesn’t work and why. Using generative AI images as reference to create something original is like using Velveeta to make a bold new take on mac and cheese (and I’m saying that as someone who genuinely likes Velveeta). Everything distinctive has been averaged out into a Sucker Punch-style blandness; it may look fine on the surface, but it ultimately feels empty.
And all of that’s completely true, but it’s also ultimately circular and unproductive, because it doesn’t address what’s so appealing about using generative AI during the concept stage:
- We care more about making it cheap and quick than making it original
- We, meaning the executives and creative directors, already know what we want
- We want you to execute on that efficiently, without wasting time, misunderstanding our direction, or introducing superfluous noise
- What do you think this is, some kind of collaboration?
Even the most charitable interpretation is that they’re demanding efficiency from the exact part of the process that benefits most from being inefficient. The part they’re not saying out loud is that they’ve already made a value judgment about which human work is valuable and which can be replaced by generative AI without losing anything significant.
Even if you see yourself as pragmatic, unswayed by any higher-minded arguments about the magic of the creative process and the value of learning by doing, and you only care about the end result, that end result seems apparent: making the hierarchical structure more rigid, putting the creative work into the hands of fewer and fewer people at the top, and obviating the work of the “lower” people.
And even if you’re fortunate enough to be one of those people comfortably high enough in the hierarchy that generative AI is being sold to you as a tool to make your job easier, you’d have to be pretty naive not to see it as a clear case of training your replacement.
In every industry, trying to automate the work of entry-level employees means denying people the opportunity to learn and develop their skill sets. That’s not some touchy-feely take demanding that companies be altruistic, either: it’s just plain common sense. Generative AI isn’t a tool to aid workers; it’s a worker.
After all, the technology that can parse a natural-language prompt to generate an image, a passage of text, or a block of code could instead have been devoted to generating hyper-relevant search results specifically tailored to a query: “Here’s a bunch of examples of how other human beings have solved this problem, or expressed this idea.” Wouldn’t that be the ideal of democratizing technology? Fostering collaboration between people, sharing ideas and showing each other what we’ve learned?
None of the companies seem to be particularly interested in this, though. Because collaboration isn’t a growth industry. There’s not much monetization opportunity in pointing people to other people’s work, aggregating original sources and giving proper credit. What if you could instead create a kind of money-laundering scheme, not just for wealth, but for creativity and copyright?
If you’ve ever tried training a machine-learning model, you’ve seen what an involved task it is to aggregate and tag all of the training data. (That’s why we’ve had websites asking us to click on pictures of stoplights and crosswalks for decades.) To get the enormous data set required for even the dogshit Gemini response above, you need the kind of resources that only huge companies with billions of dollars of investment can manage.
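To make that concrete, here’s a minimal sketch (entirely hypothetical, not any real company’s pipeline) of the basic shape of supervised training data: every single example has to be paired with a label that a human supplied or verified at some point.

```python
from dataclasses import dataclass

# Hypothetical illustration: the basic shape of a supervised training example.
@dataclass
class LabeledExample:
    path: str   # where the raw image lives
    label: str  # the tag a human had to supply or verify

# Tagging three examples by hand takes about a minute...
dataset = [
    LabeledExample("images/0001.jpg", "stoplight"),
    LabeledExample("images/0002.jpg", "crosswalk"),
    LabeledExample("images/0003.jpg", "neither"),
]

# ...but the models in question train on billions of examples, which is
# why the labeling gets crowdsourced (hello, CAPTCHAs) or scraped.
print(f"{len(dataset)} examples tagged; only a few billion to go.")
```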
Not to mention having the legal weight to render copyright effectively invalid, the funds to pay out class-action suits, and the lobbying power to make local governments see dollar signs and ignore all of the environmental concerns surrounding increasingly huge, power- and water-hungry data centers. There’s nothing at all democratic about any of that; it’s all about making sure that only the richest companies can control it.
Even if you’re working on your own, and you’re not swayed by the whole DIY philosophy, by the idea that the best way to learn is by doing, or by arguments that “if you take shortcuts, you’re only hurting yourself!”, it should still raise alarms when you find yourself getting increasingly dependent on tech that demands you treat it as a black box owned by a huge company.
I’ve gone back and forth on what I consider an “acceptable” level of using stuff like VSCode or ChatGPT to generate code. Is there a domain or an application where it makes sense? I never use it, on principle if nothing else: I’d almost always rather write the code myself than rely on something I might not understand. But for other people?
To me, it just feels like regressing to the time when I’d just started programming, when everything was locked down. I could never afford Borland C++ or Turbo Pascal, or even Microsoft BASIC for the Mac. It seemed absolutely impossible for me to make a game without getting a job at a big developer or publisher, simply because I couldn’t afford to buy the tools. So it’s still astounding to me today that compilers and IDEs come included with operating system releases, and even more astounding that full development packages like Blender and Godot are freely available.
Now that we’re heading in the right direction, why are people eager to go back to being dependent on OpenAI or Microsoft or anyone else?
You’ll frequently see people comparing generative AI to cryptocurrency and the blockchain, because the hype bubble is so familiar. But the similarities run even deeper, because they’re both pyramid schemes at their core.
That’s why they both have the constant refrain of “get on board or be left behind.” It doesn’t work for the people at the top unless all of us at the bottom ignore all evidence that it’s a bullshit scam, and instead furiously clap our hands and believe in fairies.
If this were genuinely democratizing technology, why would I have to rush to get on board or be left behind? Doesn’t it seem like I could just hang out, being slow and enjoying a weird sense of satisfaction from learning things, and wait for the geniuses to figure everything out? Based on the results above, Google’s Gemini was clearly shoved into everyone’s face long before it was ready for the public, which makes me wonder what the big hurry was.
If I were the suspicious type, I’d start to think that maybe this isn’t the futuristic vision of an anti-elitist creative utopia that will benefit everyone, like they keep claiming it is. I’d start to think it was just another scheme to keep all the power, money, and information in the hands of the “right” people.
Meanwhile, I’m going to do a price comparison on building my own PC exactly how I want, now that the cryptomining bubble has burst, and video card prices have gone down… oh wait hang on, they’re still crazy high. And also I can’t afford RAM since those prices have quintupled because of the demand for AI data centers. I guess my only options were buying NVIDIA stock 10 years ago, or buying a pre-built PC from a big company that can afford to buy components in bulk. Sorry, that’s completely irrelevant to this whole post, though.
