At the Augmented World Expo on Tuesday, Snap showed off an early version of a real-time, on-device image diffusion model that can generate vivid AR experiences. The company also unveiled generative AI tools for AR creators.
Snap co-founder and CTO Bobby Murphy said onstage that the model is small enough to run on a smartphone and fast enough to re-render frames in real time as dictated by text prompts.
Murphy said that while the emergence of generative AI image diffusion models is exciting, those models need to be significantly faster to be effective for augmented reality, which is why his team has been working to accelerate its machine learning models.
Snapchatters will start seeing Lenses using this generative model in the coming months, and Snap plans to roll it out to creators by the end of the year.
“This model, and future ML models generated in real time on devices, point to new directions for augmented reality and give us the space to rethink how we holistically imagine rendering and creating AR experiences,” Murphy said.
Murphy also announced that Lens Studio 5.0 is launching today for developers, giving them access to new generative AI tools that will let them create AR effects far faster than before, saving them weeks or even months.
AR creators can generate highly realistic ML face effects to create selfie Lenses, as well as custom stylized effects that apply realistic transformations to a user's face, body, and surroundings in real time. Creators can also generate 3D assets in minutes and include them in their Lenses.
Additionally, AR creators can use the company's Face Mesh technology to generate characters, such as aliens or wizards, from text or image prompts, and can generate face masks, textures, and materials within minutes.
The latest version of Lens Studio also includes an AI assistant that can answer AR creators' questions.