Google is slowly peeling back the curtain on its vision of one day selling glasses with augmented reality and multimodal AI capabilities. However, the company's plans for these glasses are still vague.
To date, we've seen multiple demos of Project Astra, DeepMind's effort to build real-time, multimodal AI apps and agents, running on a mysterious pair of prototype glasses. On Thursday, Google announced it will release those prototype glasses, equipped with AI and AR capabilities, to a small group of users for real-world testing.
Google also announced Thursday that the Project Astra prototype glasses run on Android XR, its new operating system for vision-based computing. Hardware makers and developers can now start building a wide variety of glasses, headsets, and experiences around that operating system.
A demo of the translation feature in Google's prototype glasses. Image credit: Google
As cool as these glasses look, it's important to remember that they are essentially vaporware. Google has yet to reveal anything concrete about an actual product or when it might be released. However, the company does seem to hope to sell smart glasses and headsets eventually, calling them "next generation computing" in a press release. For now, Google is building Project Astra and Android XR so that these glasses can one day become a real product.
Google also shared a new demo showing how its prototype glasses use Project Astra and AR technology to translate posters in front of you, remember where you left things around the house, and let you read texts without taking out your phone.
"Glasses are one of the most powerful form factors because they're hands-free and an easily accessible wearable. Everywhere you go, they see what you see," Google DeepMind product lead Bibo Xu told TechCrunch in an interview at Google's Mountain View headquarters. "It's perfect for Astra."
A close-up of Google's prototype glasses. Image credit: Google
A Google spokesperson told TechCrunch there are no plans for a consumer release of the prototype, and declined to share details about the AR technology in the glasses, how much they might cost, or how it all works in practice.
But Google at least shared its vision for AR and AI glasses in a press release Thursday.
Android XR will also support glasses for all-day help in the future. We want to give you plenty of options for stylish, comfortable glasses that you'll want to wear every day and that work seamlessly with your other Android devices. With glasses powered by Android XR, Gemini's features are just a tap away, giving you access to directions, translations, message summaries, and more when you need them, without having to reach for your phone. Everything is within your line of sight or directly in your ear.
Demo of Google's prototype glasses. Image credit: Google
In recent months, a number of tech companies have shared similarly lofty visions for AR glasses. Meta recently showed off its prototype Orion AR glasses, which also have no consumer release date. Snap's Spectacles are available for purchase by developers, but they're not yet a consumer product either.
But where Google seems to have an edge over its competitors is Project Astra, which will soon be released as an app to a small number of beta testers. Earlier this week, I had a chance to try out the multimodal AI agent as a phone app rather than glasses. Although it's not yet available to general consumers, I can confirm that it works fairly well.
I walked around a library on Google's campus, talking to Astra and pointing my phone's camera at various objects. The agent processed my audio and video simultaneously, letting me ask questions about what I was seeing and get answers in real time. As I panned from book cover to book cover, Astra quickly gave me summaries of the authors and books in front of me.
Project Astra works by streaming pictures of your surroundings, one frame per second, to an AI model for real-time processing. While that's happening, it also processes your voice as you speak. Google DeepMind says it doesn't train its models on collected user data, but the AI model does remember your surroundings and conversations for up to 10 minutes. That memory lets the AI refer back to something you saw or said earlier.
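Based on that description, here is a rough sketch in Python of what such a loop might look like: a hypothetical agent that streams one frame per second along with audio to a multimodal model and keeps a rolling ten-minute window of context. The `capture_frame`, `capture_audio_chunk`, and `model_respond` callables are placeholders for illustration, not real Google or DeepMind APIs.

```python
import time
from collections import deque
from dataclasses import dataclass

FRAME_INTERVAL_S = 1.0     # roughly one frame per second, per Google's description
MEMORY_WINDOW_S = 10 * 60  # context retained for up to 10 minutes

@dataclass
class Observation:
    timestamp: float
    frame: bytes   # a single camera frame
    audio: bytes   # audio captured since the previous frame

def prune(memory: deque, now: float) -> None:
    """Drop observations that fall outside the rolling memory window."""
    while memory and now - memory[0].timestamp > MEMORY_WINDOW_S:
        memory.popleft()

def run_agent(capture_frame, capture_audio_chunk, model_respond):
    """Stream one frame per second plus audio to a multimodal model,
    keeping only the last ~10 minutes of context."""
    memory: deque = deque()
    while True:
        now = time.time()
        memory.append(Observation(now, capture_frame(), capture_audio_chunk()))
        prune(memory, now)
        # The model sees the rolling window, so it can answer questions
        # about things the user saw or said earlier in the session.
        reply = model_respond(list(memory))
        if reply:
            print(reply)
        time.sleep(FRAME_INTERVAL_S)
```

This is only a conceptual sketch of the streaming-plus-rolling-memory behavior the article describes; the actual system presumably handles video, audio, and model inference very differently.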
Some members of the Google DeepMind team showed how Astra can read your phone's screen in the same way it understands what you see through your phone's camera. The AI quickly summarized Airbnb listings, used Google Maps to surface nearby destinations, and ran Google searches based on what was on the phone's screen.
Using Project Astra on a phone is impressive and likely a sign of what's to come for AI apps. OpenAI has also demonstrated vision capabilities for GPT-4o that are similar to Project Astra and are teased for release soon. These apps could make AI assistants far more useful by giving them capabilities that go well beyond the realm of text chat.
After using Project Astra on a phone, it's clear the AI model is really meant for glasses. Google seems to have the same idea, but it may take a while to get there.