Microsoft Copilot Designer Criticized for Generating Harmful Content by Microsoft Employee

Seattle, Washington, United States of America
Jones accused Microsoft of marketing the tool as safe, including for children, despite what he says are known risks.
Microsoft Copilot Designer generates potentially offensive or inappropriate images.
The tool frequently produces harmful content despite benign requests from the user, including sexualized images of women in violent tableaus.

Microsoft's AI tool, Copilot Designer, has been criticized by a Microsoft employee, Shane Jones, for generating potentially offensive or inappropriate images. The engineer claims that the product frequently produces harmful content despite benign requests from the user and generates sexualized images of women in violent tableaus. Jones also accused Microsoft of marketing the tool as safe, including for children, despite what he says are known risks. He has brought his concerns to US regulators and Microsoft's board of directors.



Confidence

80%

Doubts
  • It is not clear whether there are other instances in which Copilot Designer generates harmful content.

Sources

76%

  • Unique Points
    • Copilot Designer generated images that ran afoul of Microsoft's responsible AI principles
    • Jones saw that the tool generated images depicting demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, underage drinking and drug use.
  • Accuracy
    • Copilot Designer generated demons and monsters alongside terminology related to abortion rights.
    • Copilot Designer created images of Elsa wearing an Israel Defense Forces uniform while holding a shield with Israel's flag.
  • Deception (90%)
    This article is highly deceptive, as it presents a product that generates violent and sexual images as safe for all audiences. The author of the article tested the product extensively and found numerous examples of inappropriate content generated by Copilot Designer. Microsoft's legal department told Jones to remove his post immediately, which shows that the company is aware of the issue but refuses to take action on it.
    • Copilot Designer generates images that run far afoul of Microsoft’s responsible AI principles.
  • Fallacies (85%)
    The article discusses the concerns of a Microsoft engineer who discovered that Copilot Designer, an AI image generator developed by Microsoft and powered by OpenAI's technology, generates violent and sexual images. The engineer has been testing the product for vulnerabilities since December 2023 and found that it violates several of Microsoft's responsible AI principles. He reported his findings to the company but was not satisfied with their response, so he took matters into his own hands by sending letters to Federal Trade Commission Chair Lina Khan and Microsoft's board of directors. The engineer also shared examples of disturbing images generated by Copilot Designer on social media platforms.
    • Copilot Designer generates violent and sexual images that run far afoul of Microsoft's responsible AI principles.
  • Bias (85%)
    The article reports on a Microsoft engineer's concerns about the company's AI image generator, Copilot Designer. The tool has generated images that run afoul of Microsoft's responsible AI principles and depict demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, underage drinking and drug use. These scenes were created in the past three months by CNBC using the Copilot tool. The engineer internally reported his findings, but Microsoft refused to take the product off the market. He also sent letters to Federal Trade Commission Chair Lina Khan and Microsoft's board of directors, requesting that they investigate certain decisions by the legal department and management, as well as begin an independent review of responsible AI incident reporting processes.
    • Depict demons and monsters alongside terminology related to abortion rights
    • Sexualized images of women in violent tableaus, underage drinking and drug use
    • Teenagers with assault rifles
    • The tool generated images that run afoul of Microsoft's responsible AI principles
  • Site Conflicts Of Interest (50%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (50%)
    None Found At Time Of Publication

69%

  • Unique Points
    • Copilot Designer generated demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, underage drinking and drug use.
    • Copilot Designer reportedly generated Disney characters such as Elsa from Frozen in scenes at the Gaza Strip in front of wrecked buildings and 'Free Gaza' signs.
    • Jones has been trying to warn Microsoft about DALL-E 3, the model used by Copilot Designer, since December.
    • Microsoft facilitated meetings with product leadership and its Office of Responsible AI to review Jones's reports.
  • Accuracy
    • Copilot Designer generated demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, underage drinking and drug use.
  • Deception (50%)
    None Found At Time Of Publication
  • Fallacies (85%)
    The article contains several examples of fallacies. The author uses an appeal to authority by stating that Microsoft is refusing to take down Copilot Designer despite repeated warnings from the engineer who wrote a letter to the FTC. This statement implies that Shane Jones's warning should be taken as fact without any evidence presented, which is not logical or fair. Additionally, there are several examples of inflammatory rhetoric used in the article, such as the quoted example below.
    • The tool generated demons and monsters alongside terminology related to abortion rights.
  • Bias (80%)
    The article reports on the safety concerns raised by a Microsoft engineer about Copilot Designer, an AI image generator that can produce harmful and inappropriate images. The author cites examples of such images generated by the tool, including demons, monsters, sexualized content, violent scenes, and political and social issues. The author also mentions that the engineer has been trying to warn Microsoft about the model used by Copilot Designer since December but was ignored, and that Microsoft's legal team contacted him to have his post removed from LinkedIn. The article does not provide any counterarguments or alternative perspectives from Microsoft or other sources, nor does it question the validity of the engineer's claims. Therefore, the article can be considered highly biased towards presenting Copilot Designer as a dangerous and irresponsible tool that poses risks to users and society.
    • Copilot Designer generated explicit images of Taylor Swift that spread rapidly across X. (https://www.theverge.com/2024/3/6/24092191/microsoft-ai-engineer-copilot-designer-ftc-safety-concerns#:~:text=In%20January,%E2%80%A6,spread%20rapidly%20across%20X.)
    • Copilot Designer generated images of demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus. (https://www.theverge.com/2024/3/6/24092191/microsoft-ai-engineer-copilot-designer-ftc-safety-concerns#:~:text=When%20testing,abortion%20rights,%E2%80%A6)
    • Copilot Designer generated images of Disney characters in scenes at the Gaza Strip with wrecked buildings, free Gaza signs, and Elsa wearing an Israel Defense Forces uniform while holding a shield with Israel's flag. (https://www.theverge.com/2024/3/6/24092191/microsoft-ai-engineer-copilot-designer-ftc-safety-concerns#:~:text=The%20Verge%20was,Israel's%20flag.)
    • Copilot Designer generated images of Elsa from Frozen in scenes at the Gaza Strip in front of wrecked buildings and 'free Gaza' signs. (https://www.theverge.com/2024/3/6/24092191/microsoft-ai-engineer-copilot-designer-ftc-safety-concerns#:~:text=It%20also,holding%20a%20shield,%E2%80%A6)
    • Copilot Designer generated images of Elsa wearing an Israel Defense Forces uniform while holding a shield with Israel's flag. (https://www.theverge.com/2024/3/6/24092191/microsoft-ai-engineer-copilot-designer-ftc-safety-concerns#:~:text=Additionally,%E2%80%A6)
    • Copilot Designer generated images of Taylor Swift in scenes with racially diverse Nazis and other historically inaccurate images. (https://www.theverge.com/2024/3/6/24092191/microsoft-ai-engineer-copilot-designer-ftc-safety-concerns#:~:text=In%20January,%E2%80%A6,diverse%20Nazis)
  • Site Conflicts Of Interest (50%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (50%)
    None Found At Time Of Publication

68%

  • Unique Points
    • Copilot Designer generates images that add harmful content despite a benign request from the user. For example, when using just the prompt 'car accident', it has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates.
    • Other harmful content involves violence as well as political bias, underaged drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion.
    • Jones repeatedly asked the company to take Copilot Designer off the market until it is safer, or at least to change its age rating on smartphones to make clear it is for mature audiences.
    • Microsoft's legal team demanded that Jones delete his letter posted on LinkedIn in December, which he reluctantly did.
    • Jones has brought his concerns to the U.S. Senate Commerce Committee and the state attorney general in Washington, where Microsoft is headquartered.
  • Accuracy
    • Copilot Designer created images of Elsa wearing an Israel Defense Forces uniform while holding a shield with Israel's flag.
  • Deception (50%)
    The article is somewhat deceptive because it does not disclose the sources of the harmful or offensive images generated by Microsoft's AI image-generator tool. The author also uses emotional language such as 'alarm', 'harmful content', and 'inappropriate' to sensationalize the issue without providing any evidence or context for these claims. Additionally, the article does not mention that OpenAI has different safeguards than Microsoft's Copilot Designer, which could imply a bias against Microsoft. The author also omits any information about how users can control or filter the generated images to avoid unwanted results.
    • The author states that other harmful content involves violence as well as "political bias, underaged drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few," without providing any specific examples or sources. This is another example of deception by omission because the reader cannot assess the validity or severity of these claims.
    • The author also uses emotional language such as 'alarm', 'harmful content', and 'inappropriate' to sensationalize the issue without providing any evidence or context for these claims. This is an example of deception by rhetoric because the author manipulates the reader's emotions to influence their perception of Microsoft's AI image-generator tool.
    • The author claims that when using just the prompt 'car accident', Copilot Designer "has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates." This is an example of deception by omission because the author does not provide any evidence or examples of these images. The reader cannot verify this claim without using Microsoft's AI image-generator tool themselves.
    • The article also does not mention that OpenAI has different safeguards than Microsoft's Copilot Designer, which could imply a bias against Microsoft. This is an example of deception by implication because the author suggests a negative comparison between the two companies without explicitly stating it.
  • Fallacies (75%)
    The article contains several examples of informal fallacies. The author uses an appeal to authority by citing the opinions of Microsoft and U.S. regulators without providing any evidence or reasoning for their positions.
    • A Microsoft engineer is sounding alarms about offensive and harmful imagery he says is too easily made by the company's artificial intelligence image-generator tool, Copilot Designer.
  • Bias (85%)
    The article contains examples of religious bias and monetary bias. The author uses language that dehumanizes those who hold certain beliefs or values, such as when he describes the AI image-generator tool as having a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates.
    • The Federal Trade Commission confirmed it received his letter Wednesday but declined further comment. Microsoft said it is committed to addressing employee concerns about company policies and that it appreciates Jones' effort in studying and testing its latest technology to further enhance its safety.
  • Site Conflicts Of Interest (50%)
    Microsoft engineer Shane Jones raised concerns about the potential misuse of Copilot Designer and DALL-E 3 to generate deepfakes or other forms of disinformation. He also expressed concern that Microsoft's AI image-generator tool could be used for nefarious purposes by bad actors.
    • Jones said he was particularly concerned about the possibility that Microsoft's AI image-generator tool could be used to create fake videos, images or audio clips that could be used to spread misinformation or manipulate public opinion.
    • Shane Jones, a Microsoft engineer who worked on the development of Copilot Designer and DALL-E 3, raised concerns about their potential misuse to generate deepfakes or other forms of disinformation. He also expressed concern that these tools could be used for nefarious purposes by bad actors.
  • Author Conflicts Of Interest (0%)
    None Found At Time Of Publication

79%

  • Unique Points
    • Copilot Designer generates potentially offensive or inappropriate images
    • Jones criticized Microsoft for marketing the tool as safe despite known risks
    • One of the most concerning risks with Copilot Designer is when it generates harmful content despite a benign request from the user.
    • <https://www.cnbc.com/2024/03/06/microsoft-ai-engineer-says-copilot-designer-creates-disturbing-images.html>
  • Accuracy
    • Copilot Designer generates images that add harmful content despite a benign request from the user. For example, when using just the prompt 'car accident', it has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates.
    • Other harmful content involves violence as well as political bias, underaged drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (85%)
    The article contains several examples of informal fallacies. The author uses an appeal to authority by citing the US Federal Trade Commission (FTC) and Microsoft employee Shane Jones as sources for their claims. They also use inflammatory rhetoric when describing the potential harm caused by AI-generated images, including sexualized images of women and political misinformation. Additionally, they make a false dilemma by suggesting that either Copilot Designer should be removed from public use or only marketed to adults.
    • The author uses an appeal to authority when citing the US Federal Trade Commission (FTC) as a source for their claims.
  • Bias (80%)
    The author of the article is biased towards removing Microsoft's AI tool Copilot Designer from public use until better safeguards can be put in place. The author also criticizes Microsoft for marketing the tool as safe and raises concerns about potential harm caused by offensive or misleading images generated by AI image generators. Additionally, the author calls on Microsoft to conduct investigations into its decision-making process regarding AI products with significant public safety risks.
    • Jones said that, in response to the prompt "car accident," Copilot Designer "has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates."
    • Microsoft competitor Google also came under fire last month after its AI chatbot Gemini produced historically inaccurate images that largely showed people of color in place of White people
    • Shane Jones claimed that Copilot Designer has systemic issues that cause it to frequently produce potentially offensive or inappropriate images
  • Site Conflicts Of Interest (50%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (50%)
    None Found At Time Of Publication