AI has become a dirty word across almost every discipline over the past few years. As big corporations keep pushing this technology forward, a vocal resistance among creatives, critics, and passionate communities has risen up in opposition. While every creative field is feeling AI’s influence now, gamers are particularly sensitive about this technology sucking the creativity and human element out of our beloved medium. Even the mere mention of AI being used in game development triggers a massive backlash, but we need to start being more nuanced in how we talk about the ways AI should and should not be used. Because, like it or not, AI is going to become more ubiquitous in gaming. We can’t keep talking about AI as though it is a black-and-white thing. It is a tool, and like any tool, there are ways it can be used appropriately.
The question we need to ask ourselves now is: when is it ethical to use AI, and what crosses the line?
A blurry line
Game development is complicated. I say that upfront to acknowledge that it is easy for us to play armchair developer and say that AI shouldn’t be used under any circumstances, but the reality of the situation is very different. Developers, pundits, and analysts have all been shouting from the rooftops for years now about how unsustainable the current AAA landscape is, so at the very least, we can say that publishers are looking for solutions that cut cost, time, or both.
AI is the big bet right now across multiple disciplines, and that includes gaming. We’re already seeing players like PlayStation experiment with things like AI characters, while Steam is setting up flags to let players know if a game includes AI-generated content. It has been reported that Microsoft’s massive 2025 layoffs were done in part to fund its $80 billion AI infrastructure initiative, which will no doubt seep its way into Xbox’s sprawling portfolio of studios. Unless there’s some major piece of regulation put in place (which I could never see our current administration doing), it is only a matter of time before AI becomes the norm.
So, when is it okay? There are some clear examples of when it isn’t, such as AI-generated art, writing, or even entire games — anything we would hope carries a human touch, born of a person’s vision to communicate something to the player. No one wants to play a game made by AI, right? Okay, so that’s the easy part. But what about the less obvious stuff? We all seem to be okay with AI upscaling. That doesn’t hurt anyone and can be a huge load off developers’ shoulders. What about AI writing code? That influences the game, but it’s invisible to players unless they’re told. Odds are a ton of games are being coded with AI assistance right now to cut down on some of that time-consuming technical work. That’s another way to be more efficient, so should we accept that as well?

The Alters got hit with a double-whammy of controversy recently over AI, and both incidents are fascinating examples of how grey this entire issue is. The first is that one in-game display uses AI-generated text. This text is illegible under normal circumstances and was left in by mistake, with the intention of replacing it with randomized text before release. Is there so much difference between AI-generated garbage text and pre-generated text? I can understand how one feels worse, but isn’t the end result the same? The other example lands on the wrong side of the ethical line for most. Some of the in-game films the player can watch were added so late in development that 11 Bit Studios didn’t have time to localize them in all languages. So, it used AI to generate subtitles. That’s a bad practice that likely harmed the final product more than if those videos hadn’t been included, but it raises some interesting questions.
And then there’s testing. AI can stress-test and find bugs thousands of times faster than a person, but now we’re threatening the jobs of QA testers. Replacing humans is where a lot of people draw that ethical line, so should we not use it here, despite the potential to speed up development? I would never call for people to lose their jobs, but it is a sad reality that some industries do die out as technology advances. If AI is best suited for brute-force work like that, is that something that should be embraced? I don’t like slippery slope arguments, but I do think we need to be cautious about what we support with AI, knowing that capitalism can and will push it to the limits. If these jobs are okay to replace, why not those?
Perhaps an even bigger question we all need to wrestle with is the exceptions to those rules. If we say AI music is unacceptable in games, is there any exception for a solo developer self-funding their game who can’t afford to hire a musician? Would it be better to launch without music or not launch at all? There are arguments to be made on both sides. Going back to the subtitle example, what if a team can’t hire a localization team? Is it better to shut out players who speak another language entirely than to use AI out of necessity?
I pose all these questions without answering them because I can’t. I can tell you where I fall on each of these issues, but that isn’t the point. What I am hoping to present are the grey areas where we can have productive discussions about when and where AI is acceptable, if we’re willing to approach it in good faith.
We can’t afford to lump all AI into the same bucket of “AI bad” anymore. It is too nuanced a tool, with too many factors at play, to make a blanket judgment call. Yes, we don’t need AI to make games — we’ve been doing it that way for decades. The issue is that games are so complex, time-consuming, expensive, and risky that we’re in an era where even successful studios are getting closed down. If AI has the potential to ease some of that pressure and make game development a slightly safer industry, we need to start having deeper conversations about when and where it is appropriate to use it instead of vilifying it as a whole.