Meta AI, Meta's AI assistant, is getting a new voice mode.
At its Meta Connect 2024 developer conference in Menlo Park on Wednesday morning, Meta announced that Meta AI can now answer your questions aloud across the platforms where it's available, including Instagram, Messenger, WhatsApp, and Facebook. You can choose from multiple voices, including AI clones of celebrities Meta hired for the purpose (Awkwafina, Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell).
The new Meta AI voice feature isn't like OpenAI's Advanced Voice Mode for ChatGPT, which is highly expressive and can pick up on the emotional cues in a person's voice. Rather, it's closer to Google's recently released Gemini Live, which transcribes your audio before the AI responds and then reads the answer back in a synthetic voice.
Meta is betting that big-name talent will draw users to its assistant (the Wall Street Journal reports the company has paid millions of dollars to use celebrities' likenesses). We're skeptical, but we'll reserve judgment until we try it out for ourselves.
Other updates to Meta AI include an upgrade to the underlying AI models that power the experience, which lets the assistant analyze images. In supported regions, for example, you can now share a photo of a flower and ask Meta AI what kind it is, or upload a photo of a dish and ask how to make it (though there's no guarantee it'll get the answer right).
Meta also says it's testing its AI translation tool on Instagram Reels, automatically dubbing creators' speech into another language and lip-syncing the video so mouth movements match the translated audio.
Meta says it's starting with a “small test” of Reels translations on Instagram and Facebook; for now, users in the U.S. will only see translated videos from select Latin American creators, in English and Spanish.