Taylor Swift becomes latest victim of deepfake pornography, calls for legislation to criminalize the practice grow

New York, United States
  • Taylor Swift has become the latest victim of deepfake pornography
  • Images created using artificial intelligence have been distributed across social media platforms and seen by millions
  • Renewed calls for legislation to criminalize the practice are growing

Taylor Swift has become the latest victim of deepfake pornography, with images created using artificial intelligence distributed across social media platforms. The fake images have been seen by millions and have prompted renewed calls for legislation to criminalize the practice. While some politicians have spoken out against the issue, there is a growing push for a federal law that would make it illegal to share deepfake pornography without consent.



Confidence

100%

No Doubts Found At Time Of Publication

Sources

79%

  • Unique Points
    • Deepfake pornographic images of Taylor Swift have been distributed across social media and seen by millions this week
    • Yvette D Clarke, a Democratic congresswoman for New York, has written about the issue of deepfakes being used without consent and called for action to be taken
    • Joseph Morelle unveiled the proposed Preventing Deepfakes of Intimate Images Act in May 2023, which would make it illegal to share deepfake pornography without consent
    • Scarlett Johansson has spoken about widespread fake pornography featuring her likeness and the difficulty in protecting oneself from it
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (80%)
    The article is deceptive in several ways. First, it implies that deepfake pornography of Taylor Swift has been distributed across social media and seen by millions this week, when one image on X was viewed 47 million times before it was removed. Second, the article quotes Yvette D Clarke as saying what happened to Taylor Swift is nothing new, implying that deepfakes are a common occurrence, without providing any evidence or statistics to support this claim. Third, the article mentions that some individual US states have their own legislation against deepfakes but does not say how many states have such laws or whether they are effective in combating the issue. Fourth, it quotes Tom Kean Jr as saying that AI technology is advancing faster than necessary guardrails, again without providing any evidence or statistics to support the claim.
    • The article implies that deepfake pornography of Taylor Swift has been distributed across social media and seen by millions this week, when one image on X was viewed 47 million times before it was removed. This is a lie by omission, as the article does not provide accurate information about the spread of these images.
    • The article quotes Yvette D Clarke as saying what happened to Taylor Swift is nothing new, implying that deepfakes are a common occurrence, without providing any evidence or statistics to support this claim. This is a lie by omission, as the article does not provide accurate information about the prevalence of deepfakes.
    • The article quotes Tom Kean Jr as saying that AI technology is advancing faster than necessary guardrails without providing any evidence or statistics to support this claim. This is an example of sensationalism.
    • The article mentions that some individual US states have their own legislation against deepfakes but does not say how many states have such laws or whether they are effective in combating the issue. This is an example of selective reporting, as it only includes details that support the author's position without providing a complete picture.
  • Fallacies (80%)
    The article contains several examples of logical fallacies. The author uses an appeal to authority, citing the opinions and actions of politicians such as Yvette D Clarke and Joseph Morelle without providing evidence or context for their positions on deepfake pornography legislation. The author also presents a dichotomous depiction of women as the group disproportionately impacted by deepfakes while ignoring other groups that may also be affected, and includes inflammatory rhetoric such as:
    • “What's happened to Taylor Swift is nothing new. For yrs, women have been targets of deepfakes without consent.”
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (50%)
    The article discusses the issue of deepfake pornography and its impact on individuals such as Taylor Swift. The author has a financial tie to Drake, who is also mentioned in the article.
  • Author Conflicts Of Interest (50%)
    The author has a conflict of interest on the topic of deepfake pornography, as they have written about it before and may have personal biases or opinions.

76%

  • Unique Points
    • Taylor Swift is a singer and actress
    • The article discusses the spread of fake, sexually explicit images of Taylor Swift on social media platforms
    • Rep. Yvette Clarke (D-N.Y.) has called for action to address this issue
    • Advancements in technology like AI have made it easier for bad actors to create such seemingly realistic images
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (80%)
    The article contains an appeal to authority fallacy, citing the opinions of politicians and experts without providing evidence or context. The author also uses inflammatory rhetoric when describing the issue as 'nothing new' and a problem that affects women across the country.
    • “What’s happened to Taylor Swift is nothing new,” Clarke wrote on social media, as she requested action from politicians as well as the singer’s fans. Yvette D. Clarke (@RepYvetteClarke) January 25, 2024
    • The Democrat noted that advancements in technology like AI have made it easier for bad actors to create such seemingly realistic images, sometimes known as deepfakes.
    • Under the ownership of Elon Musk, however, the platform has made deep cuts to its content moderation team. The company is also under investigation in the European Union over whether it violated the 27-nation bloc’s Digital Services Act, which covers content moderation.
    • A handful of states have passed bills addressing deepfakes, but Congress has so far failed to legislate on the issue.
  • Bias (85%)
    The article contains a statement implying the author is biased towards finding a solution to an issue affecting women across the country. The author also uses language that dehumanizes Taylor Swift, implying she has been targeted without her consent and that deepfakes are easier and cheaper to create with advancements in AI technology.
    • This is an issue both sides of the aisle should be able to come together to solve.
    • What's happened to Taylor Swift is nothing new, Clarke wrote on social media
  • Site Conflicts Of Interest (0%)
    Marita Vlachou has conflicts of interest on the topics of Taylor Swift and artificial intelligence (AI).
  • Author Conflicts Of Interest (50%)
    The author has a conflict of interest on the topics of Taylor Swift and deepfakes, as both are popular-culture subjects. The article also discusses artificial intelligence (AI), which could be used to create such images.

72%

  • Unique Points
    • AI companies and Congress have the responsibility to tighten deepfake rules.
    • Kara Frederick is a guest on Fox News Channel's Your World with Neil Cavuto.
  • Accuracy
    • Deepfakes are overwhelmingly targeted at women in a sexually exploitative way.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (85%)
    The article contains several fallacies. First, the author uses an appeal to authority by stating that AI companies and Congress have a responsibility to tighten deepfake rules without providing evidence or reasoning for this claim. Second, the article contains inflammatory rhetoric when it describes deepfakes as 'a dangerous threat' without providing context or evidence. Last, the author uses a dichotomous depiction by stating that either AI companies and Congress take action to tighten deepfake rules or they are complicit in allowing the spread of fake news.
    • The article states that 'AI companies and Congress have a responsibility to tighten deepfake rules' without providing any evidence or reasoning for this claim. This is an example of an appeal to authority fallacy.
    • The author describes deepfakes as 'a dangerous threat' without providing any context or evidence for this claim. This is an example of inflammatory rhetoric.
    • The article uses a dichotomous depiction by stating that either AI companies and Congress take action to tighten deepfake rules or they are complicit in allowing the spread of fake news. This is an example of a false dilemma fallacy.
  • Bias (0%)
    The article is biased towards the idea that AI companies and Congress have a responsibility to tighten deepfake rules. The author uses language like 'responsibility' and 'tighten', which implies that regulations should be put in place.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (0%)
    None Found At Time Of Publication

68%

  • Unique Points
    None Found At Time Of Publication
  • Accuracy
    • Are we on the brink of a Civil War?
    • Deepfake pornographic images of Taylor Swift have been distributed across social media and seen by millions this week
    • Yvette D Clarke has written about the issue of deepfakes being used without consent and called for action to be taken
  • Deception (0%)
    The article contains multiple examples of deceptive practices. The author uses sensationalism by stating that graphic images of Taylor Swift are sweeping the internet, implying they are a bad thing without providing evidence to support this claim. Additionally, the author selectively reports on Charlie Kirk's comments about black pilots while ignoring other instances where he has made similar statements in the past.
    • The article states that graphic images of Taylor Swift are sweeping the internet and implies that they are a bad thing without providing any evidence to support this claim. This is an example of sensationalism.
    • The author selectively reports on Charlie Kirk's comments about black pilots while ignoring other instances where he has made similar statements in the past. This is an example of selective reporting.
  • Fallacies (85%)
    The article contains several fallacies. First, the author uses an appeal to authority by stating that they are breaking down what is going on at the border without providing evidence or sources for their claims. Second, the author makes a false dilemma by suggesting that we are either on the brink of a Civil War or not, implying there are only two options when in reality there may be more. Third, the author uses inflammatory rhetoric by stating that graphic images of Taylor Swift are sweeping the internet and calling them explicit without providing context or justification for that classification. Last, the author makes an informal fallacy by suggesting that Charlie Kirk's comments about black pilots were under fire, when no evidence is provided to support this claim.
    • Are we on the brink of a Civil War?
    • Today, I am breaking down what is going on at the border and what it means for our future. Plus, graphic images of Taylor Swift are sweeping the internet showing her in a series of explicit acts and I will tell you why I think it is a good thing.
    • Charlie Kirk under fire for comments he made about black pilots.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

67%

  • Unique Points
    None Found At Time Of Publication
  • Accuracy
    • Taylor Swift's explicit, AI-generated images spread quickly on social media
    • These photos show the singer in sexually suggestive and explicit positions
    • The incident comes as concerns are growing about how misleading AI-generated images could be used to head up disinformation efforts and disrupt the vote
  • Deception (50%)
    The article is deceptive in several ways. First, it states that the fake images of Taylor Swift were predominantly circulating on the social media site X (previously known as Twitter); however, the photos were viewed tens of millions of times before being removed from all social platforms. Second, the article claims that nothing on the internet is truly gone forever and that these images will undoubtedly continue to be shared on other, less regulated channels. This implies there are no regulations in place to prevent such content from spreading further, which contradicts what the article states later about social media companies' policies banning synthetic, manipulated or out-of-context media that may deceive or confuse people and lead to harm. Last, the article quotes Ben Decker stating that X has largely gutted its content moderation team and relies on automated systems and user reporting; however, CNN reports in a separate paragraph that Meta made cuts to its teams that tackle disinformation and coordinated troll campaigns on its platforms, which contradicts the information provided by Decker.
    • The fake images of Taylor Swift were predominantly circulating on social media site X (previously known as Twitter).
    • Nothing on the internet is truly gone forever, and they will undoubtedly continue to be shared on other less regulated channels.
  • Fallacies (85%)
    The article contains several examples of informal fallacies. The author uses an appeal to authority by citing the opinions of experts in the field without providing evidence or context for their claims. Additionally, there are multiple instances where the author presents a statement as fact without providing supporting evidence or sources.
    • The incident comes as concerns are growing about how misleading AI-generated images and videos could be used to head up disinformation efforts and ultimately disrupt the vote.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (50%)
    Samantha Murphy Kelly has a conflict of interest on the topic of AI-generated Taylor Swift images, as she is an employee of CNN, which owns and operates Instagram. This could compromise her ability to report objectively and impartially on this topic.
  • Author Conflicts Of Interest (50%)
    Samantha Murphy Kelly has a conflict of interest on the topic of AI-generated Taylor Swift images, as she is reporting for CNN, which owns and operates Instagram. This platform hosts user-generated content that includes AI-generated images.