Microsoft’s latest foray into AI-driven gaming, a demo inspired by Quake II, has stirred up quite a conversation. The tech giant set out to showcase AI’s potential in gaming, but the results have been mixed, raising questions about the demo’s execution and the hefty resources it demands.
This isn’t the first time a big name has tried something like this. Remember when Google impressed us all with their browser demo of Assassin’s Creed: Odyssey? Microsoft’s attempt, however, misses the mark on both necessity and finesse.
The demo is an interactive space built around Quake II, designed to show off AI’s capabilities by generating its visuals and actions on the fly. According to Microsoft, this was supposed to change how we interact with games. Under the hood is the World and Human Action Model (WHAM), which works much like the large language models you’ve probably heard about, except that it is trained on gameplay data rather than text.
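To make that concrete, here is a minimal sketch of how an autoregressive world model of this kind typically operates, assuming WHAM follows the usual frame-by-frame prediction pattern; the names (WorldModel, predict_next, play) are illustrative placeholders, not Microsoft’s actual API.

```python
# Sketch of an autoregressive world-model loop: each new frame is predicted
# from a short history of previous frames and player actions, rather than
# rendered by a traditional game engine. Names here are hypothetical.
from collections import deque


class WorldModel:
    """Hypothetical stand-in for a WHAM-style frame-prediction model."""

    def predict_next(self, frames, actions):
        # A real model would return a generated image conditioned on the
        # history; here we return a placeholder string to keep this runnable.
        return f"frame_{len(frames)}"


def play(model, first_frame, get_player_action, steps=10, context=8):
    frames = deque([first_frame], maxlen=context)    # rolling visual context
    actions = deque(maxlen=context)                  # rolling action context
    for _ in range(steps):
        actions.append(get_player_action())          # read controller input
        frames.append(model.predict_next(list(frames), list(actions)))
    return list(frames)


# Every screen the player sees is generated by the model, not by id Tech 2.
print(play(WorldModel(), "frame_0", lambda: "move_forward"))
```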
Interestingly, the original Quake II’s id Tech 2 engine is nowhere to be found: every frame instead comes from a model trained specifically on Quake II gameplay. Despite the significant resources poured into the AI, the demo has been criticized for its performance. Developer Sos Sosowski pointed out that keeping things running smoothly required more than three megawatts of power, which at a typical 400 watts per panel works out to thousands of solar panels.
Critics have been vocal about the demo’s performance, comparing its frame rate to a slideshow and describing the AI-generated visuals as nausea-inducing. The Copilot Gaming Experience also feels aimless, conjuring up random environments that shift as you move, which can be thoroughly disorienting.
Generative AI in gaming is a hot topic right now, but Microsoft’s latest demo hasn’t lived up to the buzz. Its heavy computational demands and underwhelming results raise real questions about where AI in gaming is headed.
While this demo was meant to wow audiences, it’s instead highlighted the challenges of seamlessly integrating AI into gaming. It’s a reminder that we might need to rethink the current path we’re on.