Meta’s AI assistant, Meta AI, is getting a new voice mode of sorts.
At the Meta Connect 2024 developer conference in Menlo Park on Wednesday morning, Meta announced that Meta AI can now respond to questions out loud across the platforms where it’s available: Instagram, Messenger, WhatsApp, and Facebook. You can choose from several voices, including AI clones of celebrities Meta hired for the purpose: Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell.
The new Meta AI voice feature isn’t like OpenAI’s Advanced Voice Mode for ChatGPT, which is highly expressive and can pick up on emotive tones in a person’s voice. Rather, it’s akin to Google’s recently launched Gemini Live, which transcribes speech before having an AI answer it and reading the answer aloud using a synthetic voice.
Meta’s betting the high-profile talent will make a difference; according to The Wall Street Journal, it paid millions for the use of the celebrity likenesses. Color us skeptical, but we’ll reserve judgment until we get to try it ourselves.
In other Meta AI updates, the assistant can now analyze images thanks to an upgrade to the underlying AI models that power the experience. In regions where it’s supported, you can, for example, share a picture of a flower you see and ask Meta AI which type it is. Or you can upload a photo of a dish and request instructions on how to make it. (Bear in mind you’ll occasionally get wrong answers.)
Meta also says it’s piloting a Meta AI translation tool to automatically translate voices in Instagram Reels. The tool dubs a creator’s speech and auto-lip-syncs it, simulating the voice in another language and making sure the lip movements match.
Meta says it’s starting with “small tests” of Reels translations on Instagram and Facebook, limited for now to videos from some Latin America-based creators, in English and Spanish, for viewers in the U.S.