Jesse Humphry

The AI Bubble Is Getting Ridiculous

Updated: Sep 8, 2023

Artificial Intelligence has been around for as long as video games needed to simulate the inhabitants of a world. Their purpose in that forum has never been to inherit the consciousness and thoughtfulness of their potential real-world peers; it has always been to serve as a piece of a puzzle that the core gameplay is meant to solve.

However, modern AI advancements typically stem from some kind of machine learning. Image generation models like Midjourney predict a desired image from a prompt, trained on a mountain of data and metadata. The results are rarely, if ever, good, but a single success out of a thousand missteps seems to prove a point to the less technologically inclined among us.

ChatGPT, a large language model, dominated headlines and brainspace when OpenAI released it on top of GPT-3.5. Its engagement and responsiveness are, without a doubt, impressive. But an understanding of how ChatGPT works immediately shatters the illusion, and a request for information it doesn't actually have sends you down a rabbit hole of reclarification, revision, and redundant responses.

And while these existing AI systems can absolutely serve as productivity tools for organizations - namely, GPT could be used as a documentation repository - the strongest cases for the use of new and poorly understood technologies almost always focus on games. When crypto promoters were still actively shilling for buy-ins, games were a major focus of the product. The ability to "play to earn" was touted as though that were a completely normal desire. So, too, have the recent advancements in LLMs and voice processing infected the video game space.

Nvidia's recent keynote is so far the pinnacle of this absurd hype around a technology that few people genuinely understand. The lifeless droning of the AI voice, the rigidity of its responses, and the railroaded inputs of the "player" demonstrate the absolute peak of this technology thus far, but worse still are the implications behind the infrastructure that powers it.

First and foremost, modern LLMs can't be run on your home computer; if they could, they'd eat all of your CPU, GPU, and RAM like an unsupervised toddler trapped in a Lego exhibit. These models require racks of high-spec processors just to serve requests. The mathematics behind these AI systems is so dense that I'm honestly sympathetic to people who don't know how it works; I barely understand it myself.
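To put "can't be run on your home computer" in rough numbers, here's a back-of-envelope sketch. The parameter counts and 16-bit precision below are illustrative assumptions, not any particular model's published specs, and this counts only the weights - activations and context caches need memory on top of this.

```python
# Back-of-envelope VRAM estimate for holding an LLM's weights.
# The parameter counts and fp16 precision are illustrative assumptions,
# not a specific product's specs.
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold the weights, in gigabytes (fp16 by default)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

print(weight_memory_gb(70))  # 140.0 GB for weights alone -- many GPUs' worth
print(weight_memory_gb(7))   # 14.0 GB -- already past most consumer cards
```

Even the hypothetical "small" model here overruns a typical gaming GPU before the game itself has rendered a single frame, which is exactly why the demo leans on the cloud.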

So the demonstration uses a cloud service to run the model. That means making API calls and getting data back. For those of you who haven't set up cloud services before: that is a high cost to incur for a single interaction. Now add in a dozen interactions inside a single encounter, then the hundreds of possible interactions in a game, and you can see how the costs balloon. Because single-player experiences are the clear target for this, the economics of single-player sales and the costs of running the AI simply don't work out. It's only a matter of time until the API service is shuttered and a huge selling point of the game becomes inaccessible a couple of years after release.
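The shape of that ballooning is easy to sketch. Every number below is a made-up placeholder - the point isn't the specific price, it's that per-player cost scales with interactions forever while the revenue from a single-player sale is collected exactly once.

```python
# Toy model of ongoing cloud-AI cost vs. a one-time game sale.
# Every number here is a hypothetical placeholder, not a real price sheet.
CENTS_PER_CALL = 1        # assumed API price: one cent per request
CALLS_PER_ENCOUNTER = 12  # "a dozen interactions inside of just a single encounter"
ENCOUNTERS = 100          # encounters a player might hit over a playthrough

def cost_per_player_dollars(playthroughs: int = 1) -> float:
    """Cumulative API spend for one player; it never stops growing."""
    cents = CENTS_PER_CALL * CALLS_PER_ENCOUNTER * ENCOUNTERS * playthroughs
    return cents / 100

SALE_PRICE = 60.0  # one-time revenue per copy

print(cost_per_player_dollars(1))   # 12.0 -- one playthrough
print(cost_per_player_dollars(10))  # 120.0 -- a replayer costs twice the sale
```

Swap in whatever prices you like; the structure stays the same. Revenue is a constant, cost is a function of engagement, and the players who love the game most are the ones who drag it underwater.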

That's not even getting into the fact that this kind of service requires the player to always be online, since, as noted, these models are too complex to run on consumer hardware. EA received a high volume of (justified) backlash in 2013 for always-on DRM and a mandatory internet connection gating non-essential features, and dozens of titles since have shipped with always-on DRM to massive criticism from the gaming community. For the most part, the eventual response is to remove the mandatory connectivity. Players are not going to put up with that for single-player experiences unless the AI voicing and dialogue are absolutely flawless.

How integral is an LLM going to be to the story? If not at all, why is money being spent on it? If it's important, can you guarantee that it will always be available alongside the game? How are you going to sustain it when the initial bulk of sales dries up? How long will it take until it starts eating so much of the profit that it's a waste to keep the model on the cloud? This is, after all, not an artistic endeavor; it's an industrialist one. It's the desire to replace human input and bypass human limitations, using machines to do an endless amount of work. No voice actors need apply if the AI can synthesize all of the voice lines, undoubtedly with mixed results each time. No writers need apply when the bulk of the world's data can be tossed into various loosely related documents translated from scribbles on an Applebee's napkin covered in sweat from being inside the Idea Guy's pockets for too long, regardless of whether or not the AI even knows how to use fucking contractions.

So why am I saying all of this?

Because the hype wouldn't be nearly as thick if the people who are hyped about these technologies had an ounce of sense to stop and think about the actual utility of the technology that they're shilling. It's like these people didn't learn from the dot-com crash or the crypto grift. Investing your time, energy, and money in a product that you don't understand (but can magically see the potential of) is no different from the infestation of "idea guys" in the game developer space. They think their "ideas" are valuable and better than anything that's out there now, but they almost never have the skills to execute on the vision, because their stock-in-trade isn't the actual work; it's the uninformed vision.

And they're thoroughly convinced that if they attach themselves to enough hypetrains, they'll eventually be right once and can strut around like a pigeon on the chessboard, knocking pieces over and declaring how smart they are for having seen the inevitable future.

Do you think I'm being harsh? Talk to one of these people. Ask them questions about the utility of AI in video games or in the workplace. Then try challenging that with the current shortcomings, the hurdles preventing widespread application, and watch as the only response you receive is "This is just the early phase; it's going to get better." No solutions, no introspection, no externalized doubt. They're as robotic as ChatGPT in their predictability.

It's as predictable with AI pushers as it is with crypto enthusiasts. They understand neither the technology they are promising nor the technologies, systems, and fields that they're trying to affect. For that reason and that reason alone, they should be summarily dismissed, relegated to setting their money on fire and throwing it into a pit with the rest of the money they spent on Web3 seminars and SEO hangouts.

For all of the claims of being a groundbreaking, world-changing technology powered by existing AI, that keynote delivers an astounding 30 seconds of a tightly run ("non-scripted") interaction with an AI. Further claims are made about what the AI is going to do behind the scenes: it's going to run facial animation, it's going to run body animation, it's going to exhibit personality.

None of this is even remotely demonstrated, which is of course the point of a keynote; it's not about existing capability, it's about a big idea. Huang calling this "real-time" doesn't mean the questions posed by the "player" are unscripted (just listen to the player's stiff tone). It's also left unclear whether the demonstration was performed live, or whether "real-time" simply means the AI was being actively polled and providing information during the recording session. There's also no telling how many attempts were necessary to get something usable.

There are so many more questions than answers, and no sufficient answers to the questions that have arisen, but it doesn't matter to people for whom the line must always go up.



