Google is taking a big step forward in AI with its latest update to Gemini Live. The company has introduced new features that let Gemini analyze your smartphone screen or camera feed in real time. This capability has been in the works since Google’s ‘Project Astra’ demo almost a year ago, and now it’s finally arriving.
These features are rolling out gradually to some Google One AI Premium subscribers. A Reddit user, whose experience was highlighted by 9to5Google, shared a video showing Gemini’s impressive screen-reading abilities on their Xiaomi device.
So, what exactly are these new features? There are two main ones. First, Gemini can instantly interpret what’s on your screen and answer related questions, which could be a game-changer for multitasking on your phone. The second feature is live video interpretation: point your camera at something and get instant feedback, like when you’re deciding on paint colors for a pottery project.
Google’s Gemini is setting the pace in the AI assistant race. Amazon and Apple are working hard on their own versions: Amazon’s Alexa Plus is almost ready, while Apple’s updated Siri is facing delays. Gemini, meanwhile, is already here and making waves. Even with Samsung’s Bixby still around, Gemini is the default assistant on Samsung smartphones, which says a lot about Google’s strategic position in the AI space.