Microsoft Updates Designer AI to Prevent Non-Consensual Photos in Deepfakes

Baltimore, Maryland, United States of America
Microsoft has updated its free AI software, Designer, to prevent the use of non-consensual photos in deepfakes.
The update was prompted by the tool being linked to sexually explicit deepfake images of Taylor Swift that circulated on social media last week and raised concerns about a possible lawsuit by the singer.

Microsoft has recently updated its free AI software, Designer – a text-to-image program powered by OpenAI’s DALL-E 3 – adding guardrails that will prevent the use of non-consensual photos. The update was prompted by the tool being linked to creating sexually explicit deepfake images of Taylor Swift, which circulated on social media last week and raised concerns about a possible lawsuit by the singer.

The fake images were traced back to Microsoft’s Designer AI before they began circulating on X, Reddit and other websites. The company has stated that it will continue to be vigilant for attempts to spread this content and remove it wherever found.



Confidence

80%

Doubts
  • It is not clear if this update will be effective in preventing all non-consensual deepfakes.

Sources

75%

  • Unique Points
    • Microsoft cracked down on the use of its free AI software after it was linked to creating sexually explicit deepfake images of Taylor Swift.
    • Any Designer users who create deepfakes will lose access to the service per Microsoft's Code of Conduct.
  • Accuracy
    • The fake photos were traced back to Microsoft's Designer AI before they began circulating on social media.
  • Deception (50%)
    Microsoft has updated its free AI software to add guardrails that will prevent the use of non-consensual photos. The update was prompted by a deepfake scandal involving Taylor Swift's images. Microsoft is taking action against users who create deepfakes and has blocked its Designer tool from producing AI-generated nude images after fake, explicit photos circulated on social media, some in an apparent reference to her relationship with Travis Kelce.
    • Microsoft has updated its free AI software, Designer, adding guardrails that will prevent the use of non-consensual photos. The update was prompted by a deepfake scandal involving Taylor Swift's images.
  • Fallacies (85%)
    The article contains an appeal-to-authority fallacy by citing Microsoft's CEO Satya Nadella as a source. The author also uses inflammatory rhetoric when describing the deepfakes scandal and its consequences for Taylor Swift. There is no evidence of formal or informal dichotomous depictions in the article.
  • Bias (85%)
    The article discusses the use of Microsoft's free AI software in creating deepfake images. The author notes that these images were traced back to Microsoft's Designer AI before they began circulating on social media, suggesting a link between the company and the creation of harmful content, which could be seen as biased framing.
    • Microsoft cracked down on the use of its free AI software after it was linked to creating deepfake images.
  • Site Conflicts Of Interest (50%)
    Shannon Thaler has a potential conflict of interest with Taylor Swift, as she is reporting on the deepfakes scandal involving the singer. Shannon Thaler may also have financial ties to Microsoft if they are mentioned in the article.
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

74%

  • Unique Points
    • X Reactivates Search Function for Taylor Swift After Surge of Deepfakes Spurred Crackdown
    • Explicit artificial intelligence-generated images of Taylor Swift amassed tens of millions of views on X, the website formerly known as Twitter
    • U.S. President Joe Biden was also the victim of a fake audio clip spreading online, created with the help of widely available AI tools
  • Accuracy
    • X Restores Search for Taylor Swift After Deepfakes Spurred Temporary Blocking
  • Deception (30%)
    The article is deceptive in several ways. First, the title implies that searches for Taylor Swift were temporarily blocked because of deepfakes, when in fact X disabled them in response to a flood of explicit deepfake images. Second, the author states that 'X has reactivated the ability to search its social network for musician Taylor Swift', which is not entirely accurate: the article does not say whether X had previously blocked searches for Taylor Swift or whether this was the first time it happened.
    • The author states that 'X has reactivated the ability to search its social network for musician Taylor Swift', which is not entirely accurate. The article does not mention whether X had previously blocked searches for Taylor Swift or whether this was the first time it happened.
    • The title implies that searches for Taylor Swift were temporarily blocked due to deepfakes, when in fact X disabled them in response to a flood of explicit deepfake images.
  • Fallacies (85%)
    The article contains an appeal-to-authority fallacy by stating that Elon Musk's X has reactivated the ability to search its social network for musician Taylor Swift. The author does not provide any evidence or reasoning behind this claim.
  • Bias (75%)
    The article is biased toward Taylor Swift and presents her as a victim of deepfakes. The author uses sympathetic language for those targeted, referring to them as 'victims' of AI-generated content. Additionally, the article mentions only two examples of deepfake technology being used against high-profile individuals (Taylor Swift and Joe Biden), which could be seen as downplaying the severity or prevalence of the issue.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (0%)
    None Found At Time Of Publication

75%

  • Unique Points
    • Searching for Taylor Swift on X showed an error message after pornographic, AI-generated images of the singer were circulated across social media last week.
    • The fake images of Swift show her in sexually suggestive and explicit positions. They were viewed tens of millions of times before being removed from social platforms.
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (85%)
    The article contains an appeal-to-authority fallacy when it states that X's policies ban the sharing of synthetic, manipulated or out-of-context media. This is not a clear statement and could be interpreted in different ways.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (0%)
    There are multiple examples of conflicts of interest in this article. The author has a financial stake in the company that owns X and is therefore likely to report on it favorably.
    • The article mentions that searches for Taylor Swift on X came up empty after explicit AI pictures went viral, but does not provide details about how these images were created or who was responsible for them. This suggests a potential conflict of interest between the company that owns X and those involved in creating the deepfake imagery.
  • Author Conflicts Of Interest (0%)
    The author has a conflict of interest on the topic of AI-generated images and deepfake photography, as both relate to X (formerly known as Twitter), which is owned by Elon Musk. The article also mentions Photoshop, which could be used for creating deepfakes.