
Google has offered a glimpse into the future of wearable AI, showcasing its latest experiment, AI Glasses powered by Gemini, during a live demonstration at a recent TED Talk. The Mountain View-based company also hinted at significant upcoming upgrades to its Gemini Live voice assistant feature, further expanding its AI ecosystem beyond smartphones and desktops.
During the TED Talk, Shahram Izadi, Vice President and General Manager of Android XR at Google, presented what could be the company’s most advanced wearable prototype yet. The new AI Glasses, which resemble standard prescription eyewear, are equipped with camera sensors, speakers, and a discreet display interface. Powered by Google’s Gemini AI, the glasses can see what the user sees and respond to queries in real time, such as composing a haiku inspired by the facial expressions of a crowd.
The presentation also demonstrated a memory function first introduced with Project Astra, allowing Gemini to “remember” objects and scenes even after they’re no longer in view. According to Google, this visual memory can last up to 10 minutes, enabling more advanced contextual assistance.
Google had previously teased the concept of XR (Extended Reality) glasses in December 2024, built in collaboration with Samsung. “Created in collaboration with Samsung, Android XR combines years of investment in AI, AR and VR to bring helpful experiences to headsets and glasses,” the company stated.
In a separate interview with 60 Minutes, Demis Hassabis, CEO of Google DeepMind, revealed that Gemini’s memory capabilities may soon be integrated into Gemini Live, a real-time, two-way voice interaction tool already capable of responding to live video feeds. Currently, Gemini Live lacks the ability to retain contextual memory, but that may change soon.
Hassabis also suggested that future updates could introduce social responsiveness features, such as Gemini offering a personalised greeting when the feature is switched on.
The AI Glasses, while still in the prototype stage, also appear capable of executing more complex tasks beyond answering questions. According to early demonstrations, users could potentially use them to perform online transactions or access deeper layers of AI interaction.
While Google has yet to announce a public release timeline, these developments signal its renewed ambition in the AI wearable space, an area it first explored with Google Glass over a decade ago.