Google I/O 2025: Live updates on Gemini, Android XR, Android 16 and more

Google I/O has kicked off in Mountain View, California, and the opening keynote delivered a laundry list of big AI announcements. A parade of Google execs, starting with CEO Sundar Pichai, took the stage to detail how the search giant is weaving its Gemini artificial intelligence models throughout its entire catalog of online services, as the company continues to vie for supremacy in the AI space with rivals like OpenAI, Microsoft and a host of others. Among the big announcements today: a new AI-powered filmmaking tool called Flow; virtual clothing try-ons enabled by photo uploads; real-time AI-powered translation coming to Google Meet; considerably more capable computer vision features for Project Astra; a big upgrade to Google's Veo video generation model; and a live demo of real-time translation via the company's XR glasses. See everything announced at Google I/O for a full recap.
The initial keynote has ended, but a developer-centric keynote will follow later today (4:30PM ET / 1:30PM PT). If you want a recap of the event as it happened, scroll down to read our full liveblog, anchored by our on-the-ground reporter Karissa Bell and backed up by off-site members of the Engadget staff.
You can also rewatch Google's keynote in the embedded video above or on the company's YouTube channel. Note that the company plans to hold breakout sessions through May 21 on a variety of topics relevant to developers.
Stick around, by the way, for Karissa's hands-on impressions of some of the things Google announced today, including those new XR glasses!
Thanks to everyone who tuned in for this wild AI ride with us.
We have more stories about everything Google announced today on Engadget, so please check us out there.
As part of its I/O 2025 announcements today, Google shared details on new features designed to make shopping in AI Mode more seamless. It describes the three new tools as part of its new shopping experience in AI Mode, covering the discovery, try-on and checkout stages of the process. They will be available "in the coming months" for online shoppers in the US.
The first update covers searching for a specific item to buy; the examples Google shared were travel bags and a rug that matches the other furniture in a room. By combining Gemini's reasoning capabilities with its Shopping Graph database of products, Google's AI will infer from your query that you'd like lots of pictures to browse and pull up a new image-laden panel.
Read more: Google’s AI Mode lets you virtually try clothes on by uploading a single photo
Ok, now that the AI deluge has slowed down, I can see why people are excited about this technology. But at the same time, it almost feels like Google is throwing features against a wall to see what sticks. There is A LOT going on, and trying to cover a million different applications in a two-hour presentation gets messy fast.
Aaaaaand it looks like that’s it. Whew!
Wifi has been slipping since that AR glasses demo, so I'm a bit late here, but it's worth lingering on that for a moment. It's been more than a decade since Google first tried augmented reality glasses with Google Glass. The tech (and our perception of it) has advanced so much since then that it's great to see Google trying something this ambitious. So far this demo feels on par with what Meta showed with its Orion prototype last year (with a few bumps).
Now Pichai is recounting a story about riding in a driverless Waymo with his father and realizing how powerful this technology will be.
With the show wrapping, what did we think of the announcements Google made today?
Pichai says Google is building something new called FireSat. It's designed to more closely monitor wildfire outbreaks, which is especially important to folks in places like California.
As part of this year's announcements at its I/O developer conference, Google has revealed its latest media generation models. Most notable, perhaps, is Veo 3, the first iteration of the model that can generate videos with sound. It can, for instance, create a video of birds complete with the sound of their singing, or a city street with the sounds of traffic in the background. Google says Veo 3 also excels at real-world physics and lip syncing. At the moment, the model is primarily available to Gemini Ultra subscribers in the US within the Gemini app and to enterprise users on Vertex AI. (It's also available in Flow, Google's new AI filmmaking tool.)
Read more: Google’s Veo 3 AI model can generate videos with sound
Sundar Pichai is back, ostensibly to wrap up the show.
Apparently Gemini got mentioned more times than AI itself during the keynote.
Google says it's working hard to create a platform for these types of glasses, with retail devices available as early as later this year.
Gentle Monster and Warby Parker are slated to be the first glasses makers to build something on the Android XR platform.
It seems Google has tempted the live demo gods one too many times, because one pair of the glasses got confused and thought Izadi was speaking Hindi.
One of the new AI features Google announced for Search at I/O 2025 will let you talk with it about what your camera sees in real time. Google says more than 1.5 billion people use visual search through Google Lens, and it's now taking the next step in multimodality by bringing Project Astra's live capabilities into Search. With the new feature, called Search Live, you can have a back-and-forth conversation with Search about what's in front of you. For instance, you can simply point your camera at a difficult math problem and ask it to help you solve it or to explain a concept you're having trouble grasping.
Read more: Google Search Live will let you ask questions about what your camera sees
Now Izadi is doing what he calls a risky demo: speaking Farsi with another presenter while the glasses translate in real time.
Of course, the glasses can take photos too.
Izadi reveals that he’s even using his glasses as a personal teleprompter during the keynote.
We first saw these glasses in a demo Google showed off last year during the Project Astra showcase.
The glasses can respond to voice commands while also surfacing notifications for things like reminders. They incorporate some Project Astra-like features to remember things you've seen, and they can even display heads-up turn-by-turn directions to specific locations.
Google is showing how an unnamed pair of Android XR glasses can respond to voice requests to mute sounds.
Apparently Giannis of the Milwaukee Bucks is backstage wearing these glasses too.
Google is showing some AR glasses that look pretty similar to Meta's Orion (maybe a tad less chunky). It's pretty unclear from this demo what the field of view might be, but it looks like it could be fairly small.
Of course, echoing the old Google Glass, Android XR will be available on lightweight smart glasses as well.
Izadi says, "It's a natural form factor for Android XR."
Made by Samsung, Project Moohan is set to be the first Android XR device.
It allows you to talk to Gemini about anything you can see, whether in the real world or on a virtual screen.
Project Moohan is slated to go on sale later this year.
Google has started testing a reasoning model called Deep Think for Gemini 2.5 Pro, the company has revealed at its I/O developer conference. According to DeepMind CEO Demis Hassabis, Gemini Deep Think uses “cutting-edge research” that gives the model the capability to consider multiple hypotheses before responding to queries.
Google says the model got an "impressive score" when evaluated on questions from the 2025 United States of America Mathematical Olympiad. However, Google wants to take more time to conduct safety evaluations and get further input from safety experts before releasing it widely. That's why it's initially making Deep Think available to trusted testers via the Gemini API to get their feedback.
Read more: Google introduces the Deep Think reasoning model for Gemini 2.5 Pro and a better 2.5 Flash
Google says it's reimagining all of its most popular apps for XR.
But the most exciting one is Android XR.
It’s meant to work on both headsets and smart glasses.
And finally we're getting to Android XR! Interesting that Google is specifically calling out AR glasses and smart glasses w/ AI capabilities (a la Meta). IMO glasses have been the best thing for XR; headsets are just so limited.
Izadi is reiterating how many of these new Gemini features are coming soon to a wide range of devices, including smartwatches, phones and cars.
This is a good reminder that all this AI is very resource-intensive (and expensive), so Google has some new plans for everyone who wants access to more prompts. Notably, not a word yet today about the environmental impact of all this.
Next up is Shahram Izadi.
For context, $250 a month is $50 more than what OpenAI and Anthropic each charge for their top-tier plans.
What do we think of the $250 price for the Ultra plan?
$250 a month for Google AI Ultra is A LOT.
Now comes pricing for some of Google’s various AI plans including a new Ultra tier.
Google is enhancing Gemini’s text-to-speech (TTS). On Tuesday at Google I/O 2025, the company previewed a new TTS feature, built on native audio output, that can “converse in more expressive ways.”
Google's Tulsee Doshi gave a quick demo of the Gemini 2.5 TTS models onstage in Mountain View, featuring an AI-powered voice that sounds more natural and less robotic, with subtler nuances.
The TTS can converse in over 24 languages, switching between them seamlessly.
Read more: Google’s new text-to-speech can switch languages on the fly
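For the developer-curious, here's a minimal sketch of what calling the new native-audio TTS might look like through the Gemini API's Python SDK (google-genai). The preview model ID ("gemini-2.5-flash-preview-tts"), the "Kore" voice name and the 24kHz PCM output format below are assumptions drawn from Google's developer documentation rather than anything confirmed onstage, so treat it as illustrative only.

# Minimal sketch of Gemini native-audio TTS via the google-genai Python SDK.
# Assumptions (not confirmed onstage): the preview model ID
# "gemini-2.5-flash-preview-tts", the "Kore" prebuilt voice, and
# 24 kHz / 16-bit mono PCM output.
import wave

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-tts",  # assumed preview model ID
    contents=(
        "Say warmly, switching to French halfway through: "
        "Welcome to the show. Bienvenue a tous!"
    ),
    config=types.GenerateContentConfig(
        response_modalities=["AUDIO"],
        speech_config=types.SpeechConfig(
            voice_config=types.VoiceConfig(
                prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Kore")
            )
        ),
    ),
)

# The API returns raw PCM audio bytes; wrap them in a WAV container to play back.
pcm = response.candidates[0].content.parts[0].inline_data.data
with wave.open("gemini_tts_demo.wav", "wb") as f:
    f.setnchannels(1)      # mono (assumed)
    f.setsampwidth(2)      # 16-bit samples (assumed)
    f.setframerate(24000)  # 24 kHz sample rate (assumed)
    f.writeframes(pcm)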
If I remember correctly, didn't we see some AI music video-making thing last year? Don't think we ever got an update on that.
Now Google is presenting testimonials from some AI filmmakers highlighting the potential of its tools.
I'm curious to know what the generation limit of Flow is; most video generation models top out at around 60 seconds of footage.
Update, May 19 2025, 1:01PM ET: This story has been updated to include details on the developer keynote taking place later in the day, as well as tweak wording throughout for accuracy with the new timestamp.
Update, May 20 2025, 9:45AM ET: This story has been updated to include a liveblog of the event.
Update, May 20 2025, 2:08PM ET: This story has been updated to include an initial set of headlines coming out of the I/O keynote.
Update, May 20 2025, 4:20PM ET: Added additional headlines and links based on the flow of news from the initial keynote.
Google's annual I/O developer conference kicked off on Tuesday, May 20. See everything Google has announced at I/O 2025 so far, including an AI-powered movie creation tool called Flow, real-time translation in Google Meet, virtual clothing try-ons based on uploaded photos, AI enhancements to Project Astra's computer vision and more. Follow Engadget's Google I/O liveblog for a recap of the event as it unfolded in real time. Google previewed some key pre-I/O Android 16 news during its Android Show video stream last week.