Snap, the social media company behind Snapchat, has recently showcased its latest advancements in augmented reality (AR) technology at the Augmented World Expo. The company unveiled an early version of its real-time image diffusion model for AR experiences and new generative AI tools for creators.
At the event, Snap co-founder and CTO Bobby Murphy demonstrated how this on-device model can generate vivid AR experiences in real time. The small yet powerful machine learning model runs on a smartphone, making it accessible to a wider audience. Snap plans to bring the technology to users in the coming months and to creators by the end of the year.
Murphy emphasized that while generative AI image diffusion models are exciting, they need to become significantly faster to be impactful for augmented reality. To address this challenge, Snap's teams have been working on accelerating its machine learning models.
In addition to the real-time image model, Snap introduced new generative AI tools for AR creators in Lens Studio 5.0. These tools will help creators generate custom ML models and assets for their AR effects much faster than before, saving them weeks or even months of work.
AR creators can now create selfie lenses with highly realistic ML face effects, as well as generate custom stylization effects that apply a transformation over the user's face, body, and surroundings in real time. They can also generate 3D assets based on a text or image prompt within minutes and include them in their Lenses.
Moreover, creators can now generate characters such as aliens or wizards using Face Mesh technology with a text or image prompt. They can also create face masks, textures, and materials within minutes. The latest version of Lens Studio also includes an AI assistant that can answer questions from AR creators.
Snap's advancements in AR technology are expected to change the way users interact with their digital environment, making it more engaging and immersive.