Snap Unveils Real-Time AR Image Diffusion Model and Faster Generative AI Tools for Creators

Santa Monica, California, United States of America

Snap, the social media company behind Snapchat, has recently showcased its latest advancements in augmented reality (AR) technology at the Augmented World Expo. The company unveiled an early version of its real-time image diffusion model for AR experiences and new generative AI tools for creators.

At the event, Snap co-founder and CTO Bobby Murphy demonstrated how this on-device model can generate vivid AR experiences in real time. The small yet powerful machine learning model runs on a smartphone, making it accessible to a wider audience. Snap plans to bring this technology to users in the coming months and to creators by the end of the year.

Murphy emphasized that while generative AI image diffusion models have been exciting, they need to become significantly faster to be impactful for augmented reality. To address this challenge, Snap's teams have been working on accelerating machine learning models.

In addition to the real-time image model, Snap introduced new generative AI tools for AR creators in Lens Studio 5.0. These tools will help creators generate custom ML models and assets for their AR effects much faster than before, saving them weeks or even months of work.

AR creators can now create selfie lenses with highly realistic ML face effects, as well as generate custom stylization effects that apply a transformation over the user's face, body, and surroundings in real time. They can also generate 3D assets based on a text or image prompt within minutes and include them in their Lenses.

Moreover, creators can now generate characters like aliens or wizards using Face Mesh technology with a text or image prompt. They can also create face masks, textures, and materials within minutes. The latest version of Lens Studio also includes an AI assistant that can answer questions for AR creators.

Snap's advancements in AR technology are expected to revolutionize the way users interact with their digital environment, making it more engaging and immersive.



Confidence

100%

No Doubts Found At Time Of Publication

Sources

99%

  • Unique Points
    • Snapchat is offering a preview of its new on-device AI model that can transform user surroundings with AR.
    • The new model will enable creators to turn a text prompt into a custom lens.
    • Users will start seeing lenses using this new model in the coming months, while creators can start making lenses by the end of this year.
  • Accuracy
    • Snap plans to bring this generative model to users in the coming months and to creators by the end of the year.
    • AR creators can generate selfie Lenses with highly realistic ML face effects, custom stylization effects, and create 3D assets in minutes.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

100%

  • Unique Points
    • Snap showcased an early version of its real-time, on-device image diffusion model for AR experiences at the Augmented World Expo.
    • The model is small enough to run on a smartphone and fast enough to re-render frames in real time.
    • Snap plans to bring this generative model to users in the coming months and to creators by the end of the year.
    • Lens Studio 5.0 launched for developers with new generative AI tools for creating AR effects faster.
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

98%

  • Unique Points
    • Snap launched a new iteration of its generative AI technology for advanced augmented reality (AR)
    • AR developers can now create AI-powered lenses for Snapchat users
    • Snap announced an upgraded version of Lens Studio for artists and developers to create AR features more easily, reducing creation time from weeks to hours
    • New suite of generative AI tools in Lens Studio includes an AI assistant and a tool that generates 3D images from prompts, removing the need to develop a 3D model from scratch
    • Snap plans to create full body AR experiences
  • Accuracy
    • Users will start seeing lenses using this new model in the coming months, while creators can start making lenses by the end of this year.
    • Creators can also generate characters like aliens or wizards using Face Mesh technology and text or image prompts.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (95%)
    The article contains some inflammatory rhetoric and an appeal to authority, but no formal or dichotomous fallacies are present. The author states that Snap is 'betting' on making more advanced special effects to attract new users and advertisers, implying that this is a risky move. This is an example of inflammatory rhetoric as it creates a sense of urgency and importance around the company's actions. The author also quotes Bobby Murphy, Snap's chief technology officer, stating that the enhanced Lens Studio will reduce development time and produce more complex work. This is an example of an appeal to authority as Murphy is positioned as an expert in the field and his opinion lends credibility to the claims made in the article.
    • Snap is betting on making more advanced special effects, called lenses, to attract new users and advertisers to Snapchat.
    • Bobby Murphy, Snap’s chief technology officer, said the enhanced Lens Studio would reduce the time it takes to create AR effects from weeks to hours and produce more complex work.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

99%

  • Unique Points
    • Snap is introducing new AR experiences powered by GenAI for Snapchatters and AR developer community.
    • Real-time image model in Snap’s AR can generate vivid experiences based on user’s imagination.
    • GenAI models are now optimized for faster, more performant techniques on mobile devices.
    • New GenAI Suite introduced in Lens Studio enables AR creators to generate custom ML models and assets for their Lenses.
    • London’s National Portrait Gallery collaborated with Snap to create portrait-style Lenses using the GenAI Suite.
  • Accuracy
    • Snapchat is offering a preview of its new on-device AI model that can transform user surroundings with AR.
    • Snap plans to bring this generative model to users in the coming months and to creators by the end of the year.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (0%)
    None Found At Time Of Publication