The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Thu, 29 Aug, 12:06 AM UTC
10 Sources
[1]
Doom Running on a Neural Network Is a Surreal Dreamscape
We've already seen the iconic 1993 video game Doom being played on devices ranging from a candy bar to a John Deere tractor to a Lego brick to E. coli cells. Now, researchers at Google and Tel Aviv University have taken the viral trend even further, by using a generative AI model to run the game instead of a conventional video game engine. The results are about as trippy as one would expect, as seen in a video shared by the researchers, with bad guys morphing in and out of existence and walls shifting unnervingly. Visual weirdness aside, it's still an impressively faithful rendition of the 1993 video game and a striking demonstration of the power of the tech. "Can a neural model running in real-time simulate a complex game at high quality?" the researchers wrote in their yet-to-be-peer-reviewed paper. "In this work, we demonstrate that the answer is yes." "Specifically, we show that a complex video game, the iconic game Doom, can be run on a neural network," they added. Conventionally, video game engines react to user inputs and visually render the scene according to a manually programmed set of rules. But by harnessing the power of diffusion models, used by most mainstream AI image generators like Stable Diffusion and DALL-E, the researchers found they could ditch the approach in favor of AI. Their new diffusion model, dubbed GameNGen, is based on Stable Diffusion's open-source version 1.4 and was trained on 900 million frames taken from existing Doom gaming footage. GameNGen produces the next frame depending on the user's input, effectively acting as an illusory game engine. "While not an exact simulation, the neural model is able to perform complex game state updates, such as tallying health and ammo, attacking enemies, damaging objects, opening doors, and persist the game state over long trajectories," the researchers wrote in their paper. The researchers, however, admitted there were some clear limitations to their approach.
"The model only has access to a little over 3 seconds of history," they wrote in their paper. As a result, objects like barrels and bad guys disappear and appear out of nowhere. Nonetheless, they found that the "game logic is persisted for drastically longer time horizons." "While some of the game state is persisted through screen pixels (e.g. ammo and health tallies, available weapons, etc.), the model likely learns strong heuristics that allow meaningful generalizations," the paper reads. The tech could open plenty of doors in the world of video game development, potentially lowering costs and making the developmental process more accessible. Games could even be written and edited in text format, or by feeding in AI sample images. "For example, we might be able to convert a set of frames into a new playable level or create a new character just based on example images, without having to author code," the team wrote. "Today, video games are programmed by humans," the researchers concluded. "GameNGen is a proof-of-concept for one part of a new paradigm where games are weights of a neural model, not lines of code."
[2]
This AI Model Can Simulate the PC Game Doom in Real-Time
I've been with PCMag since October 2017, covering a wide range of topics, including consumer electronics, cybersecurity, social media, networking, and gaming. Prior to working at PCMag, I was a foreign correspondent in Beijing for over five years, covering the tech scene in Asia. We've all seen how AI image generators can churn out pictures of whatever you'd like. But what if you took the same technology and applied it to generating stills for a playable game? Researchers at Google recently used this concept to develop an AI model that's capable of simulating the 1993 classic PC shooter Doom -- but without using computer code from the game itself. Instead, the researchers' model works by pumping out stills for the game like an AI image generator does, except it can do so in real-time at over 20 frames per second for a playable experience. The model is called GameNGen, and it's the subject of a new paper from researchers at Google and Tel Aviv University. "Can a neural model running in real-time simulate a complex game at high quality? In this work we demonstrate that the answer is yes," they write. "Specifically, we show that a complex video game, the iconic game Doom, can be run on a neural network." In the paper, the researchers note a computer game fundamentally works like this: the player makes an action or input, the game state updates accordingly, and then it renders the result on the screen. This so-called "game loop" creates the illusion that you're in an interactive virtual world, even though your computer is just showing you changing pictures on the screen. The researchers used Stable Diffusion version 1.4, an open-source AI image generator. They also developed a separate AI model to play the real Doom game while recording the footage for a total of 900 million frames. The resulting training data is then used by Stable Diffusion to pump out game images, adapting them as it receives inputs from the player. 
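The "game loop" described above can be contrasted with the generative version in a short sketch. All names and the toy state here are illustrative, not from the paper: the point is only that the hand-coded update/render pair collapses into a single learned frame predictor.

```python
def classic_game_loop(inputs, update, render):
    """Conventional engine: hand-coded rules update the game state,
    then a renderer draws that state to the screen."""
    state, frames = {"health": 100}, []
    for inp in inputs:
        state = update(state, inp)      # game logic (hand-written rules)
        frames.append(render(state))    # rasterize the current state
    return frames


def neural_game_loop(inputs, predict_frame):
    """GameNGen-style engine: one learned model maps (frame history,
    input) directly to the next frame; no explicit state object."""
    frames = []
    for inp in inputs:
        frames.append(predict_frame(frames, inp))
    return frames
```

In the classic loop the illusion of an interactive world comes from rules a programmer wrote; in the neural loop it comes from statistics the model absorbed from 900 million frames of training footage.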
The team posted several clips of GameNGen rendering Doom, including footage of human players trying it out. The results show the AI model is able to accurately simulate the classic PC shooter both visually and on a gameplay level. For example, the model can simulate a door opening as the player approaches and a fireball hitting the player, taking away some health. However, GameNGen also contains some major limitations. "The model only has access to a little over 3 seconds of history," the researchers wrote. As a result, enemies and objects can sometimes pop in out of nowhere and then disappear seconds later. Nevertheless, GameNGen is able to create the illusion that it can remember the game world because each rendered image allows the model to infer the player's ammo, health status, weapons, and location. The other issue is that a traditional computer game can be quite complex. In addition to rendering pixels on a screen, a game can contain dialogue and numerous characters, along with story and game mechanics that can happen off-screen. But despite the limitations, the researchers say GameNGen shows how generative AI could transform game development, potentially leading to AI-created games, which Nvidia's CEO has also predicted could occur in the next five to 10 years. "For example, we might be able to convert a set of frames into a new playable level or create a new character just based on example images, without having to author code," the researchers wrote in their paper, adding: "Today, video games are programmed by humans. GameNGen is a proof-of-concept for one part of a new paradigm where games are weights of a neural model, not lines of code."
[3]
Google reaches grim AI milestone, replicates playable version of Doom entirely by algorithm
Doom has been ported to dozens of devices, but it's never been playable quite like this. Google researchers have now generated an AI version of the retro first-person shooter classic entirely via neural network, based on ingested video clips of gameplay. It's a milestone, if a grim one, recorded in a paper published this week entitled "Diffusion models are real-time game engines" (thanks, VentureBeat). This documents how a small team from Google were able to "interactively simulate" a version of Doom, with only a "slightly better than random chance" of humans being able to tell the difference. Humans are still (currently) required to play Doom first, to provide the video clips of gameplay that are then fed into GameNGen, the research team's game engine which is "powered entirely by a neural model". It's the same principle as the now-commonplace ability for AI to learn from and then generate static images, based on ingesting huge amounts of dubiously-sourced data online. GameNGen then produces sequential frames based on its learnings of 'watching' that gameplay, which are then output at 20fps with a visual quality "comparable to the original". Here's how it looks: "Can a neural model running in real-time simulate a complex game at high quality?" the paper asks. "In this work we demonstrate that the answer is yes. While not an exact simulation, the neural model is able to perform complex game state updates, such as tallying health and ammo, attacking enemies, damaging objects, opening doors, and persist the game state over long trajectories. "GameNGen answers one of the important questions on the road towards a new paradigm for game engines, one where games are automatically generated, similarly to how images and videos are generated by neural models in recent years. Key questions remain, such as how these neural game engines would be trained and how games would be effectively created in the first place, including how to best leverage human inputs.
We are nevertheless extremely excited for the possibilities of this new paradigm." Earlier this year, a major Eurogamer investigation looked at how AI was already changing video game development forever - with positive and negative results. A report from Unity in March claimed that more than 60 percent of game developers were already using AI at some stage in the development process.
[4]
Google's GameNGen: AI breaks new ground by simulating Doom without a game engine
Google researchers have reached a major milestone in artificial intelligence by creating a neural network that can generate real-time gameplay for the classic shooter Doom -- without using a traditional game engine. This system, called GameNGen, marks a significant step forward in AI, producing playable gameplay at 20 frames-per-second on a single chip, with each frame predicted by a diffusion model. "We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality," the researchers state in their paper, published on the preprint server arXiv. This achievement marks the first time an AI has fully simulated a complex video game with high-quality graphics and interactivity. Running on a single Tensor Processing Unit (TPU) -- Google's custom-built AI accelerator chip -- GameNGen handles Doom's intricate 3D environments and fast-paced action with remarkable efficiency, all without the usual components of a game engine. AI game engines: A game-changer for the $200 billion gaming industry Doom has long been a technological benchmark since its 1993 release, ported to an astonishing array of platforms -- from microwaves to digital cameras. However, GameNGen transcends these earlier adaptations. Unlike traditional game engines that rely on painstakingly coded software to manage game states and render visuals, GameNGen autonomously simulates the entire game environment using an AI-driven generative diffusion model. The transition from traditional game engines to AI-driven systems like GameNGen could transform the $200 billion global gaming industry. By eliminating the need for manually programmed game logic, AI-powered engines have the potential to significantly reduce both development time and costs.
This technological shift could democratize game creation, enabling smaller studios and even individual creators to produce complex, interactive experiences that were previously unimaginable. Beyond cost and time savings, AI-driven game engines could open the door to entirely new genres of games, where the environment, narrative, and gameplay mechanics dynamically evolve based on player actions. This innovation could reshape the gaming landscape, moving the industry away from a blockbuster-centric model toward a more diverse and varied ecosystem. From video games to autonomous vehicles: Broader implications of AI-driven simulations The potential applications for GameNGen extend far beyond gaming. Its capabilities suggest transformative possibilities in industries such as virtual reality, autonomous vehicles, and smart cities, where real-time simulations are essential for training, testing, and operational management. For instance, autonomous vehicles require the ability to simulate countless driving scenarios to safely navigate complex environments -- a task that an AI-driven engine like GameNGen could perform with high fidelity and real-time processing. In the realm of virtual and augmented reality, AI-driven engines could create fully immersive, interactive worlds that adapt in real-time to user inputs. This could revolutionize sectors like education, healthcare, and remote work, where interactive simulations can provide more effective and engaging experiences. The future of gaming: When AI dreams of virtual worlds While GameNGen represents a significant leap forward, it also presents challenges. Although it can run Doom at interactive speeds, more graphically intensive modern games would likely require much greater computational power. Additionally, the current system is tailored to a specific game (i.e. Doom), and developing a more general-purpose AI game engine capable of running multiple titles remains a tough challenge. 
Nevertheless, GameNGen is a crucial step toward a new era in game engines -- one where games are not just played by AI but also created and powered by it. As AI continues to advance, we may be on the cusp of a future where our favorite games are born not from lines of code, but from the boundless creativity of machines. This development also opens up exciting possibilities for game creation and interaction. Future games could adapt in real-time to player actions, generating new content on the fly. AI-powered game engines might also dramatically reduce development time and costs, potentially democratizing game creation. As we stand on the brink of this new era in gaming, one thing is clear: the lines between human creativity and machine intelligence are blurring, promising a future of digital entertainment we can scarcely imagine. With GameNGen, Google researchers have given us an exciting glimpse of that future -- a world where the only limit to our virtual experiences is the imagination of AI.
[5]
AI makes Doom in its own game engine -- Google's GameNGen project uses Stable Diffusion to simulate gameplay
Google Research scientists have released their paper on GameNGen, an AI-based game engine that generates original Doom gameplay on a neural network. Using Stable Diffusion, scientists Dani Valevski, Yaniv Leviathan, Moab Arar, and Shlomi Fruchter designed GameNGen to process its previous frames and the current input from the player to generate new frames in the world with surprising visual fidelity and cohesion. AI-generating a complete game engine with consistent logic is a unique achievement. GameNGen's Doom can be played like an actual video game, with turning and strafing, firing weapons, and accurate damage from enemies and environmental hazards. An actual level is built around you in real-time as you explore it. It even keeps a mostly precise tally of your pistol's ammo. According to the study, the game runs at 20 FPS and is difficult to distinguish in short clips from actual Doom gameplay. To obtain all of the training data necessary for GameNGen to accurately model its own Doom levels, the Google team trained its agent AI to play Doom at all difficulties and simulate a range of player skill levels. Actions like collecting power-ups and completing levels were rewarded. At the same time, player damage or death was punished, creating agents that could play Doom and providing hundreds of hours of visual training data for the GameNGen model to reference and recreate. A significant innovation in the study is how the scientists maintained cohesion between frames while using Stable Diffusion over long periods. Stable Diffusion is a ubiquitous generative AI model that generates images from image or text prompts and has been used for animated projects since its release in 2022. Stable Diffusion's two most significant weaknesses for animation are its lack of cohesion from frame to frame and its eventual regression in visual fidelity over time.
As seen in Corridor's Anime Rock Paper Scissors short film, Stable Diffusion can create convincing still images but suffers from flickering effects as the model outputs consecutive frames (notice how the shadows seem to jump all across the faces of the actors from frame to frame). The flickering can be solved by feeding Stable Diffusion its output and training it using the image it created to ensure frames match one another. However, after several hundred frames, the image generation becomes less and less accurate, similar to the effect of photocopying a photocopy many times. Google Research solved this problem by training new frames with a more extended sequence of user inputs and frames that preceded them -- rather than just a single prompt image -- and corrupting these context frames using Gaussian noise. Now, a separate but connected neural network fixes its context frames, ensuring a constantly self-correcting image and high levels of visual stability that remain for long periods. The examples of GameNGen seen so far are, admittedly, less than perfect. Blobs and blurs pop up on-screen at random times. Dead enemies become blurry mounds after death. Doomguy on the HUD is constantly flickering his eyebrows up and down like he's The Rock on Monday Night Raw. And, of course, the levels generated are inconsistent at best; the embedded YouTube video above ends in a poison pit where Doomguy suddenly stops taking damage at 4% and completely changes its layout after turning around 360 degrees inside it. While the result is not a winnable video game, GameNGen produces an impressive simulacrum of the Doom we love. Somewhere between tech demos and thought experiments on the future of AI, Google's GameNGen will become a crucial part of future AI game development if the field continues. Paired with Caltech's research on using Minecraft to teach AI models consistent map generation, AI-baked video game engines could be coming to a computer near you sooner than we'd thought.
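The noise-augmentation trick described above can be sketched as follows. This is an illustrative sketch only: the frames-as-float-lists representation and the sigma range are assumptions, not the paper's actual values, and the real system also feeds the sampled noise level to the model so it knows how much correction to apply.

```python
import random


def corrupt_context(frames, max_sigma=0.7, rng=random):
    """Noise augmentation for training: past (context) frames are
    corrupted with Gaussian noise of a randomly sampled level, so the
    model learns to produce clean output from imperfect history. At
    inference time this lets it correct artifacts in its own previous
    outputs instead of amplifying them, avoiding the
    photocopy-of-a-photocopy drift. Frames here are flat float lists."""
    sigma = rng.uniform(0.0, max_sigma)
    noisy = [[pixel + rng.gauss(0.0, sigma) for pixel in frame]
             for frame in frames]
    return noisy, sigma
```

In training, the model sees `noisy` (plus the actions) as context and the true next frame as target, which is what makes the self-correcting behavior described above possible.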
[6]
AI Is Hallucinating DOOM
Can it hallucinate DOOM? Google Research and Tel Aviv University have successfully simulated DOOM within a neural learning model named GameNGen. It's been a big year for the "Can It Run DOOM" scene. We got DOOM running on poop germs, for example, and a mad scientist taught a lab-grown rat brain to play DOOM. But Google Research and Tel Aviv University have flipped the script with their GameNGen project -- they aren't just running DOOM in a neural model, they're simulating DOOM without utilizing any traditional code, visual assets, or game engines. Metaphorically speaking, we now have a machine that can "think" DOOM into existence. The simulated DOOM is fully interactive and immediately recognizable. It runs in full color at 20 FPS on a single TPU (tensor processing unit), meaning that "neural game engines" like GameNGen can be relatively lightweight. While this is not the first AI simulation of DOOM, it is by far the most impressive and accurate. GameNGen's training was achieved through a two-phase process. First, a reinforcement learning model (a reward-seeking AI, kind of like a lab rat) was taught to play DOOM. Its gaming sessions were recorded and passed on to a diffusion model (an AI that's comparable to the predictive text algorithm in your smartphone keyboard), which learned to predict and generate in-game visuals. The models were not exposed to DOOM's source code or visual asset library. "A complex video game, the iconic game DOOM, can be run on a neural network (an augmented version of the open Stable Diffusion v1.4), in real-time, while achieving a visual quality comparable to that of the original game. While not an exact simulation, the neural model is able to perform complex game state updates, such as tallying health and ammo, attacking enemies, damaging objects, opening doors, and persist the game state over long trajectories." While the AI DOOM simulation is obviously very impressive, it's not perfect.
Many of the "complex game state updates" simulated by the AI are affected by tell-tale visual artifacts. The health and ammo tickers at the bottom of the screen regularly flick between numbers, and movement is often subject to the kind of smudginess that we often see in generative video. Still, GameNGen runs DOOM at a better quality and frame rate than most PCs did in the mid-90s. And this is without the elegant DOOM Engine (or any conventional game engine, for that matter). Google Research also found that, when viewing short clips between 1.6 seconds and 3.2 seconds, humans had a lot of trouble differentiating the fake DOOM from the real DOOM (their success rate was 58% to 60%, depending on video length). The image is often perfect; it just fails to be consistently perfect. As for how this research will be used in the future -- it's anyone's guess. Google Research and Tel Aviv University have proven that an interactive game can run within the paradigm of a neural model. But they did not create a game from scratch. The arduous process of simulating a game within a neural model has no practical or economic benefit as of 2024. GameNGen, in its current form, is just a proof of concept. However, this research may lead to the development of a neural model that can generate unique games. If generative game development can be achieved at a lower cost than traditional game development (while also providing a fun experience for gamers), something like GameNGen could become a viable product. But training may prove to be the biggest hurdle here, as the AI would need a decent understanding of how games work (GameNGen appears to lean heavily on visual observations), and, importantly, it would need a massive dataset containing a diverse array of existing, copyrighted games. While I've tried my best to explain this research, I suggest reading the Diffusion Models Are Real-Time Game Engines whitepaper and visiting the GameNGen Github page. Source: GameNGen
[7]
AI creates a playable version of the original Doom, generating each frame in real-time
Google's research scientists have published a paper on its new GameNGen technology, an AI game engine that generates each new frame in real-time based on player input. It kind of sounds like Frame Generation gone mad in that everything is generated by AI, including visual effects, enemy movement, and more. AI generating an entire game in real-time is impressive, even more so when GameNGen uses its tech to recreate a playable version of id Software's iconic Doom. This makes sense when you realize that getting Doom to run on lo-fi devices, high-tech gadgets, and even organic material is a rite of passage. Seeing it in action, you can see some of the issues when it comes to AI generating everything (random artifacts, weird animation), but it's important to remember that everything you see is being generated and built around you in real-time as you move, strafe, and fire shotgun blasts at demons. As expected, the underlying AI model was trained on Doom and played repeatedly by AI agents trained to play the game, simulating various skills and playstyles. The result is impressive, to be sure. However, the game runs at 20 FPS, so there are still latency and performance improvements before GameNGen could be considered a viable option for playing a game. What makes this an essential breakthrough for generative AI is how the image stays consistent between frames, something AI has struggled with when animating physical objects and characters. Each frame is separate without any underlying physics calculations or physical rendering. GameNGen presents a notable improvement thanks to Google Research extending the training on new frames with preceding frames and user input information. Here's the official description of what it does and how it works.
[8]
New AI model can hallucinate a game of 1993's Doom in real time
"Why write rules for software by hand when AI can just think every pixel for you?" On Tuesday, researchers from Google and Tel Aviv University unveiled GameNGen, a new AI model that can interactively simulate the classic 1993 first-person shooter game Doom in real time using AI image generation techniques borrowed from Stable Diffusion. It's a neural network system that can function as a limited game engine, potentially opening new possibilities for real-time video game synthesis in the future. For example, instead of drawing graphical video frames using traditional techniques, future games could potentially use an AI engine to "imagine" or hallucinate graphics in real time as a prediction task. Further Reading "The potential here is absurd," wrote app developer Nick Dobos in reaction to the news. "Why write complex rules for software by hand when the AI can just think every pixel for you?" GameNGen can reportedly generate new frames of Doom gameplay at over 20 frames per second using a single tensor processing unit (TPU), a type of specialized processor similar to a GPU that is optimized for machine learning tasks. In tests, the researchers say that ten human raters sometimes failed to distinguish between short clips (1.6 seconds and 3.2 seconds) of actual Doom game footage and outputs generated by GameNGen, identifying the true gameplay footage 58 percent or 60 percent of the time. Real-time video game synthesis using what might be called "neural rendering" is not a completely novel idea. Nvida CEO Jensen Huang predicted during an interview in March, perhaps somewhat boldly, that most video game graphics could be generated by AI in real time within five to ten years. GameNGen also builds on previous work in the field, cited in the GameNGen paper, that includes World Models in 2018, GameGAN in 2020, and Google's own Genie in March. 
And a group of university researchers trained an AI model (called "DIAMOND") to simulate vintage Atari video games using a diffusion model earlier this year. Also, ongoing research into "world models" or "world simulators," commonly associated with AI video synthesis models like Runway's Gen-3 Alpha and OpenAI's Sora, is leaning in a similar direction. For example, during the debut of Sora, OpenAI showed demo videos of the AI generator simulating Minecraft. Diffusion is key In a preprint research paper titled "Diffusion Models Are Real-Time Game Engines," authors Dani Valevski, Yaniv Leviathan, Moab Arar, and Shlomi Fruchter explain how GameNGen works. Their system uses a modified version of Stable Diffusion 1.4, an image synthesis diffusion model released in 2022 that people use to produce AI-generated images. "Turns out the answer to 'can it run DOOM?' is yes for diffusion models," wrote Stability AI Research Director Tanishq Mathew Abraham, who was not involved with the research project. While being directed by player input, the diffusion model predicts the next gaming state from previous ones after having been trained on extensive footage of Doom in action. The development of GameNGen involved a two-phase training process. Initially, the researchers trained a reinforcement learning agent to play Doom, with its gameplay sessions recorded to create an automatically generated training dataset -- that footage we mentioned. They then used that data to train the custom Stable Diffusion model. However, using Stable Diffusion introduces some graphical glitches, as the researchers note in their abstract: "The pre-trained auto-encoder of Stable Diffusion v1.4, which compresses 8x8 pixel patches into 4 latent channels, results in meaningful artifacts when predicting game frames, which affect small details and particularly the bottom bar HUD." And that's not the only challenge.
Keeping the images visually clear and consistent over time (often called "temporal coherency" in the AI video space) can be a challenge. GameNGen researchers say that "interactive world simulation is more than just very fast video generation," as they write in their paper. "The requirement to condition on a stream of input actions that is only available throughout the generation breaks some assumptions of existing diffusion model architectures," including repeatedly generating new frames based on previous ones (called "autoregression"), which can lead to instability and a rapid decline in the quality of the generated world over time.
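The two-phase training process described in these articles (an RL agent plays the game to generate recordings, then a generative model learns next-frame prediction from them) can be sketched as follows. All function names here are hypothetical, and the "predictor" is a trivial lookup table standing in for the fine-tuned Stable Diffusion model.

```python
def collect_dataset(agent_policy, env_step, n_steps):
    """Phase 1: an RL agent plays the game; each (frame, action,
    next_frame) transition is recorded as training data."""
    frame, dataset = "frame0", []
    for _ in range(n_steps):
        action = agent_policy(frame)
        next_frame = env_step(frame, action)
        dataset.append((frame, action, next_frame))
        frame = next_frame
    return dataset


def train_predictor(dataset):
    """Phase 2 (stand-in): the real system fine-tunes a diffusion model
    on the recordings; here we just memorize (frame, action) -> next
    so the two-stage shape of the pipeline is visible and testable."""
    return {(frame, action): nxt for frame, action, nxt in dataset}
```

The crucial property, noted in the paper, is that the dataset is generated automatically: once the agent can play, no human labeling is needed to produce the 900 million training frames.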
[9]
As is tradition, AI researchers got Doom running on Stable Diffusion
Key Takeaways: Doom running on an AI image generator by Google researchers; the GameNGen project uses AI agents to play Doom and simulate gameplay; the AI version shows some oddities, but it's an impressive feat overall. If it has a screen, it can run Doom. This mantra has encouraged people to run Doom on all manner of devices, from Def Con badges to lawnmowers. Now, the trend continues, as researchers from Google have gotten a playable version of the classic FPS running within an AI image generator. Researchers get Doom running on Stable Diffusion As spotted by Tom's Hardware, this project was developed by several researchers under the name GameNGen. The researchers created AI agents to play many games of Doom, rewarding them when they got pickups and killed enemies and punishing them if they got hurt or died. The researchers could then feed lots of gameplay into a Stable Diffusion model, which "memorized" what the levels looked like, where the enemies spawned, and how the game works. Once the training had been completed, people could play Doom via Stable Diffusion. The player could give the model inputs, and the AI image generation model would remember what that action meant in terms of the game and replicate them as AI images. This includes fighting demons, with Stable Diffusion simulating both taking out monsters and taking damage from them. The researchers claim that "human raters are only slightly better than random chance at distinguishing short clips of the game from clips of the simulation," but eagle-eyed Doom fans will see some weirdness with the AI version of the game. For example, the AI has a tough time representing the ammo count at the bottom right, with the numbers morphing into one another regularly. Also, Doom Guy's portrait either jitters uncontrollably or locks itself into one face for long periods.
However, it's still an impressive feat to get a playable version of Doom running on an AI image generation model. If you'd like to make some AI images yourself, be sure to check out the best AI image generators out there.
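The reward shaping described above (rewarding pickups and kills, punishing damage and death) might look like the following toy sketch. The actual reward terms and magnitudes used by the researchers are not given in these articles, so the event names and numbers here are purely illustrative.

```python
def reward(event):
    """Illustrative reward table in the spirit of the agent training
    described above: positive reward for progress, negative for harm.
    Unknown events are neutral."""
    table = {
        "pickup": 1.0,           # collecting items/power-ups
        "enemy_killed": 2.0,
        "level_complete": 10.0,
        "damage_taken": -1.0,
        "death": -5.0,
    }
    return table.get(event, 0.0)


def episode_return(events):
    """Total (undiscounted) reward the agent accumulates over one run."""
    return sum(reward(e) for e in events)
```

The agent's job is simply to maximize this return; the varied, human-like gameplay it produces along the way is what becomes the diffusion model's training footage.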
[10]
Google Unveils GameNGen: Integrating Generative AI into Gaming
New research from Google and Tel Aviv University allows real-time interaction with complex game environments. On 27th August 2024, Google released a paper detailing the capabilities of their new model GameNGen. This research was authored by Dani Valevski (Researcher at Google Research), Yaniv Leviathan (Engineer at Google Research), Moab Arar (PhD candidate at Tel Aviv University), and Shlomi Fruchter (Engineer at Google DeepMind). GameNGen's Capabilities Powered entirely by a neural model, GameNGen is one of the first game engines that allows high-quality, real-time interactions with complex environments over long periods. GameNGen can run classic DOOM at more than 20 frames per second on a single TPU. Its next-frame prediction achieves a PSNR of 29.4, akin to lossy JPEG compression. The training occurs primarily in two phases: (1) an RL agent learns to play the game, with training sessions being recorded, and (2) a diffusion model is trained to predict the next frame using the sequence of previous frames and actions. This approach, while still in its nascent stage, is a testament to a new paradigm where games are weights of a neural model rather than lines of code. GameNGen reveals that a neural model can interactively run a complex game like DOOM on existing hardware. Architecture & How It Works In order to collect data efficiently, an RL agent plays the game, and its actions and observations are recorded as training data. To improve consistency, the Stable Diffusion model is trained to predict the next game frame using previous frames and actions. Lastly, the model's decoder is fine-tuned to improve image quality and reduce visual artefacts. AI in Gaming Jack O'Brien, formerly of Google, remarked that this research is seminal and opens up new possibilities for the world of video generation and the use of AI.
Jensen Huang, founder of NVIDIA, has been bullish on the role AI will play in the future of gaming. "We used AI to revolutionise computer graphics. Graphics enabled AI, and now AI is saving graphics," Huang said. Last year, OpenAI's acquisition of Global Illumination, a gaming and design startup and the creators of Biomes, highlighted OpenAI's interest in games as a testbed for AI simulation. As AI research advances, companies will increasingly test their models in game worlds before real-world deployment.
Google researchers have developed an AI model capable of simulating the classic game DOOM in real-time, without using a traditional game engine. This breakthrough demonstrates the potential of AI in game development and simulation.
In a groundbreaking development, Google researchers have created an AI model that can simulate the iconic first-person shooter DOOM in real-time, without relying on a traditional game engine [1]. This achievement marks a significant milestone in the field of artificial intelligence and game development, showcasing the potential of AI to revolutionize how games are created and experienced.
The AI model, known as GameNGen, utilizes a combination of advanced machine learning techniques, including neural networks and Stable Diffusion [2]. By training on gameplay footage, GameNGen learns to replicate the game's visuals, physics, and mechanics, effectively creating a playable version of DOOM entirely through algorithmic processes [3].
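Conceptually, the generation loop is autoregressive: each new frame is predicted from a sliding window of recent frames and player actions, and the prediction is fed back into the window. The following toy sketch illustrates that loop; the names (`run_game`, `predict_next_frame`, `CONTEXT_LEN`) are hypothetical, and the stub "model" produces integers rather than images:

```python
from collections import deque

CONTEXT_LEN = 4  # number of past frames/actions the model conditions on (illustrative)

def predict_next_frame(past_frames, past_actions):
    """Stand-in for the diffusion model: derives a fake 'frame' (an int) from the
    conditioning window. The real model would emit an image instead."""
    return sum(past_frames) + sum(past_actions)

def run_game(actions, initial_frame=0):
    frames = deque([initial_frame] * CONTEXT_LEN, maxlen=CONTEXT_LEN)
    acts = deque([0] * CONTEXT_LEN, maxlen=CONTEXT_LEN)
    rendered = []
    for a in actions:                # one user input per step
        acts.append(a)
        frame = predict_next_frame(list(frames), list(acts))
        frames.append(frame)         # the prediction becomes part of the next context
        rendered.append(frame)
    return rendered

print(run_game([1, 0, 2]))  # → [1, 2, 6]
```

The key property this illustrates is that the engine never consults game code: the next state is whatever the model predicts from its own previous outputs plus the inputs, which is also why errors can accumulate over long trajectories.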
This innovation has far-reaching implications for the gaming industry. By demonstrating that complex game environments can be simulated without traditional game engines, GameNGen opens up new possibilities for rapid prototyping, game design, and even entirely new forms of interactive entertainment [4].
While impressive, the AI-generated version of DOOM is not without its limitations. The current model struggles with certain aspects of gameplay, such as rendering text and maintaining consistent object permanence [5]. These challenges highlight the complexity of fully replicating a game environment and the work that remains to be done in this field.
As AI continues to advance in the realm of game simulation, it raises questions about the future of game development and the role of human creators. While GameNGen demonstrates the potential for AI to assist in game creation, it also prompts discussions about authorship, creativity, and the ethical implications of AI-generated content in the gaming industry [3].
The technology behind GameNGen has potential applications beyond gaming. Similar AI models could be used for simulating complex environments in fields such as urban planning, scientific research, and virtual training programs [4]. This versatility underscores the broader impact of AI advancements in simulation and modeling.
Artificial intelligence has successfully recreated the iconic game DOOM, marking a significant milestone in AI-driven game development. This achievement showcases the potential of AI in revolutionizing the gaming industry.
5 Sources
A new AI model has demonstrated the ability to simulate Super Mario Bros. gameplay after analyzing video footage. This breakthrough highlights the potential of AI in game development and raises questions about copyright and ethical implications.
4 Sources
Google DeepMind unveils Genie 2, an advanced AI model capable of generating interactive 3D worlds from single images or text prompts, showcasing potential applications in AI research and creative prototyping.
19 Sources
Artificial intelligence is transforming the video game industry, enabling more dynamic and interactive experiences. From generating detailed game environments to creating lifelike NPCs, AI is pushing the boundaries of what's possible in game development.
8 Sources
As generative AI makes its way into video game development, industry leaders and developers share their thoughts on its potential impact, benefits, and challenges. From enhancing NPC interactions to streamlining development processes, the integration of AI in gaming is sparking both excitement and concern.
3 Sources