© 2026 The Tech Buzz. All rights reserved.
Google DeepMind Unleashes Project Genie to AI Ultra Subscribers
Project Genie lets users create infinite AI-generated interactive worlds in real time
PUBLISHED: Thu, Jan 29, 2026, 5:37 PM UTC | UPDATED: Sat, Jan 31, 2026, 8:20 AM UTC
5 min read
Google DeepMind launches Project Genie for Google AI Ultra subscribers in the U.S., enabling AI-generated interactive worlds
Powered by Genie 3 world model, generating environments in real time as users navigate and interact
Features include world sketching with text/images, real-time exploration, and remixing of existing creations
Marks shift from static AI content generation to dynamic, physics-simulating environments with AGI implications
Google just cracked open access to one of its most ambitious AI experiments yet. Starting today, Google DeepMind is rolling out Project Genie to Google AI Ultra subscribers in the U.S., giving paying customers the ability to create, explore, and remix interactive worlds using nothing but text prompts and images. Powered by the Genie 3 world model previewed last August, the prototype represents a major leap in generative AI's evolution from static content creation to dynamic, navigable environments that generate in real time as you move through them.
Google is making a bold bet that the future of AI isn't just about generating images or text, but entire interactive universes you can step inside. Project Genie, now available to Google AI Ultra subscribers in the U.S., is the company's experimental research prototype that transforms world-building from a developer's domain into something anyone can do with a few words and a sketch.
The launch represents the next evolution of Genie 3, Google DeepMind's general-purpose world model that's been impressing trusted testers since its August preview. Unlike static 3D environments or pre-rendered game worlds, Genie 3 generates the path ahead in real time as you move and interact. "Even in this early form, trusted testers were able to create an impressive range of fascinating worlds and experiences, and uncovered entirely new ways to use it," Product Manager Diego Rivas wrote in today's announcement.
The technology addresses a fundamental challenge in building artificial general intelligence: creating systems that can navigate the messy, unpredictable diversity of the real world. While Google DeepMind has previously built specialized agents for constrained environments, Genie 3 simulates physics and interactions for dynamic worlds with what the company calls "breakthrough consistency." That consistency enables simulation of any real-world scenario, from robotics testing to historical recreations to pure fiction.
Project Genie's web app interface centers on three core capabilities that transform how users interact with AI-generated content. World Sketching lets you prompt with text and uploaded images to create living, expanding environments. Integration with Nano Banana Pro gives users fine-grained control, allowing them to preview and modify worlds before jumping in. You can define your character, choose your mode of exploration – walking, riding, flying, driving – and select your perspective, from first-person to third-person views.
World Exploration turns those created environments into navigable spaces that expand as you move through them. The system generates terrain, objects, and interactions on the fly based on your actions, with adjustable camera controls as you traverse the landscape. It's a departure from traditional game design, where every asset and interaction must be hand-crafted or procedurally generated from fixed rulesets.
The third pillar, World Remixing, introduces a collaborative creative element. Users can build on existing worlds by modifying their prompts, explore curated creations in the gallery, or hit the randomizer for inspiration. Finished explorations can be downloaded as videos, creating shareable artifacts from what are essentially infinite, procedurally generated experiences.
But Google isn't overselling the technology's current state. The company acknowledges several limitations in detailed documentation: generated worlds might not look completely realistic or adhere closely to prompts and real-world physics, character control can degrade as latency rises, and generations are capped at 60 seconds. Some Genie 3 capabilities announced in August, like promptable events that dynamically change worlds as you explore, aren't yet included in the prototype.
The move to bring Project Genie to paying subscribers rather than researchers or developers signals Google's confidence in the technology's consumer appeal, despite its experimental status. By limiting initial access to Google AI Ultra subscribers – users already paying for the company's most advanced AI features – the company gets real-world usage data from motivated early adopters while managing computational costs and expectations.
The timing puts Google DeepMind ahead of competitors in the race to productize world models. While companies like OpenAI and Meta have demonstrated impressive video generation capabilities, interactive, navigable environments represent a significant technical leap. The applications extend far beyond entertainment: robotics training, architectural visualization, educational simulations, and creative storytelling all become more accessible when you can generate and explore 3D spaces through natural language.
Access begins rolling out today to eligible U.S. subscribers aged 18 and up, with plans to expand to additional territories. Google says its goal is to eventually make these experiences and underlying technology accessible to more users, though no timeline was provided for broader availability or pricing changes.
For now, Project Genie represents Google's most tangible demonstration yet of how world models could reshape creative workflows and human-computer interaction. Whether users will actually want to spend time creating and exploring AI-generated worlds remains an open question, but the company is betting that infinite, interactive possibilities will prove more compelling than static outputs.
Project Genie marks a pivotal moment in generative AI's evolution from creating static outputs to building dynamic, explorable universes. By putting world model technology in the hands of paying subscribers rather than keeping it locked in research labs, Google DeepMind is accelerating the feedback loop between cutting-edge AI research and real-world usage. The limitations are clear and acknowledged, but the potential applications – from robotics training to creative storytelling to educational simulations – suggest we're witnessing the early stages of how humans will interact with AI-generated environments. As access expands beyond U.S. subscribers and the technology matures, the question isn't whether world models will reshape creative workflows, but how quickly the rest of the industry will catch up to what Google just made available today.