According to Kotaku, Larian Studios cofounder Swen Vincke has elaborated on the studio’s use of AI after his initial comments ignited a major backlash from fans. Vincke told Bloomberg the company is using generative AI for placeholder text, meeting presentations, and concept art references, but stated adamantly that it would not appear in finished games like the newly announced RPG Divinity. The controversy prompted Vincke to post a new statement on X on Thursday, December 18, 2025, saying it would be “irresponsible” not to evaluate new technologies. He has now promised a Reddit AMA in January 2026 featuring different departments to give fans more insight and address their concerns directly, following a week of heated online debate.
The Backlash Is Real
Here’s the thing: the reaction wasn’t just mild disappointment. It was outrage. For a studio like Larian, which built its reputation on meticulous, human-crafted worlds in games like Baldur’s Gate 3, even dabbling in AI is seen as a betrayal by a vocal part of its community. The core ethical argument is well-known—that generative AI models are trained on artists’ work without consent or compensation—but the emotion here is deeper. It’s about purity. Fans worry that the “secret sauce” of Larian’s magic could be diluted, even if it’s just in early concept stages. They don’t want any “AI slop” in the pipeline, period. And you can’t really blame them, given how much soulless, AI-generated content is flooding other corners of the internet.
Vincke’s Tightrope Walk
So Vincke is walking a very fine line. His full statement on X is a masterclass in corporate-community diplomacy. He anchors everything in Larian’s “DNA” of agency and empowering people. He’s not backing down from the premise that exploring tech is a director’s job. But he’s also clearly listening, hence the planned AMA. The subtext? “Trust us, we’re still the good guys.” He’s trying to separate Larian’s “experimentation” from the industry’s feared race to replace humans. Is it working? The proof will be in that January AMA. If it’s vague, the fury will return. If it’s transparent, maybe he rebuilds trust.
The Bigger Picture Problem
This whole saga highlights the impossible position studios are in now. Completely ignoring AI tools, even just for internal brainstorming or speeding up tedious tasks, seems naive. But openly admitting you use them is a PR grenade. The term “AI” has become so toxic in creative circles that it might as well be a curse word. So what’s a studio to do? Probably what many already do: use it quietly and never, ever talk about it. Vincke’s honesty, ironically, might be his biggest headache. It forces a public conversation the industry isn’t ready to have: what does “ethical” or “responsible” AI use even look like in a creative field? Is there a version that doesn’t enrage your fanbase? We’re about to find out.
