Scarlett Johansson Threatens Legal Action Against OpenAI for Digital Voice Replication

San Francisco, California, United States of America
Scarlett Johansson threatened legal action against OpenAI for allegedly replicating her voice in 'Sky', a voice option for its new AI system.
OpenAI initially denied wrongdoing but later took down Sky's voice out of respect for Johansson.
SAG-AFTRA welcomed OpenAI's decision and said it looks forward to working with the company on protecting performers' voices and likenesses.
The incident raised concerns about the use of digital replication technologies and their impact on performers' livelihoods.
OpenAI has previously faced criticism for using creators' data without consent and for its goal of building an artificial general intelligence (AGI) that could impact various aspects of human life.

In a recent turn of events, Hollywood actor Scarlett Johansson found herself at the center of a controversy involving OpenAI and 'Sky', a voice option in its latest digital assistant. According to reports, Johansson had previously declined an invitation from OpenAI to license her voice for the new AI system. However, Sky's voice was later found to bear a striking resemblance to Johansson's own, a discovery that led the actress to threaten legal action against OpenAI.

OpenAI CEO Sam Altman initially denied any wrongdoing and claimed that they had already cast a different voice actor for Sky before reaching out to Johansson. However, in response to the public backlash, Altman announced that OpenAI would be taking down Sky's voice 'out of respect' for Johansson.

This incident has raised concerns about the use of digital replication technologies and their potential impact on performers' livelihoods. The SAG-AFTRA union, which represents thousands of Hollywood actors, welcomed OpenAI's decision to pause using Sky and said it looks forward to working with the company on protecting performers' voices and likenesses.

The rapid rise of voice imitation technologies has caused anxiety among many in the entertainment industry. Last summer, SAG-AFTRA went on strike in part due to concerns over the future of generative AI in the entertainment industry.

This is not the first dispute over OpenAI's use of creators' data without consent; the company has faced multiple lawsuits on that front. It has also drawn criticism for its end goal of building an artificial general intelligence (AGI) that could significantly impact various aspects of human life, including jobs, science, and medicine.

As this story continues to unfold, it remains to be seen how the industry will respond and what measures will be taken to protect performers' rights in the age of AI.



Confidence

91%

Doubts
  • Is it confirmed that OpenAI used Johansson's voice recordings without her consent?
  • What specific laws were violated by OpenAI in this case?

Sources

84%

  • Unique Points
    • Scarlett Johansson threatened legal action against OpenAI for allegedly copying and imitating her voice after she declined to license it.
    • OpenAI asked Johansson to be one of the voices called 'Sky' for its newest AI system, but she refused.
    • There is a growing rift between creators and companies over the use of AI to imitate actors' likenesses without consent or compensation.
  • Accuracy
    • OpenAI asked Johansson to be one of the voices called ‘Sky’ for its newest AI system, but she refused.
    • OpenAI debuted its new voice assistant, featuring Sky, two days after asking Johansson to reconsider licensing her voice.
    • Sam Altman, OpenAI CEO, claimed the company had already cast a different voice actor before reaching out to Johansson and would take down Sky’s voice ‘out of respect’ for her.
  • Deception (50%)
    The article contains selective reporting and emotional manipulation. The author focuses on the legal threat made by Scarlett Johansson against OpenAI for allegedly copying her voice without permission. However, the author fails to mention that Johansson declined an offer from OpenAI to be one of the voices for their newest AI system and that she only threatened legal action after they went ahead and created a voice similar to hers despite her refusal. The author also uses emotional language such as 'shocked', 'angered', and 'disbelief' to manipulate the reader's emotions towards OpenAI. Additionally, the article implies that OpenAI is intentionally trying to mimic Johansson's voice for nefarious purposes, but it does not provide any evidence of this beyond her statement.
    • Johansson said that the similarity was intentional...
    • The actress declined, though she said that the similarity was intentional...
    • She wrote in a statement...
  • Fallacies (90%)
    The article contains an appeal to authority when it mentions the legal threats and lawsuits against OpenAI and LOVO. The author also makes a dichotomous depiction by presenting the actions of OpenAI as a 'blunder' and a 'growing rift' between creators and tech firms, implying that there are only two sides to this issue. However, no explicit fallacies were found in the direct quotes from the article.
    • OpenAI dropped 'Sky'.
    • A lawsuit from Bette Midler against Ford over a series of commercials called 'The Yuppie Campaign', in which the company used an impersonator of the singer to imitate her voice, may be instructive.
    • Legal threats and lawsuits against OpenAI and LOVO.
  • Bias (95%)
    The author expresses a clear bias towards the actors and their rights to control the use of their voices in AI systems. He uses language that depicts OpenAI as acting unethically and in bad faith by attempting to mimic Johansson's voice without her consent. The author also implies that OpenAI is targeting vulnerable individuals, such as new actors or screenwriters, who may not have the resources to fight back against the company.
    • OpenAI faced an uphill battle in court if it was met with litigation, according to legal experts consulted by THR.
    • The legal threat follows the filing of a proposed class action in New York federal court against Berkeley-based AI startup LOVO accusing the company of stealing and profiting off of the voices of actors, as well as those of A-list talent such as Johansson, Ariana Grande and Conan O’Brien.
    • There’s growing mistrust of AI companies, with many believing that the Altman-led firm isn’t operating in good faith.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

76%

  • Unique Points
    • OpenAI approached Scarlett Johansson for voice licensing but allegedly used her voice without permission in their digital assistant, Sky.
    • Sam Altman, OpenAI CEO, claimed the company had already cast a different voice actor before reaching out to Johansson and would take down Sky's voice 'out of respect' for her.
    • OpenAI has been involved in multiple lawsuits over the use of creators' or copyright owners' data without consent.
    • The end goal of OpenAI is to build an artificial general intelligence (AGI) that could significantly impact various aspects of human life, including jobs, science, and medicine.
  • Accuracy
    • OpenAI debuted its new voice assistant, featuring Sky, two days after asking Johansson to reconsider licensing her voice.
    • Sam Altman, OpenAI CEO, claimed the company had already cast a different voice actor before reaching out to Johansson and would take down Sky’s voice ‘out of respect’ for her.
    • OpenAI has been involved in multiple lawsuits over the use of creators’ or copyright owners’ data without consent, as well as over copyright infringement.
    • OpenAI employees have expressed the belief that AGI will create tremendous wealth but that it may not be equitably distributed.
  • Deception (30%)
    The article contains selective reporting and emotional manipulation. The author focuses on the Scarlett Johansson situation as an example of AI's raw deal and ethical concerns, but fails to mention that OpenAI had already cast a different voice actor before reaching out to Johansson. This omission creates a misleading impression about the company's actions. Additionally, the author uses emotive language such as 'shocked', 'angered', and 'disbelief' to manipulate readers' emotions towards OpenAI and Sam Altman.
    • Altman, of course, has testified before Congress, urging lawmakers to regulate the technology while also stressing that ‘the benefits of the tools we have deployed so far vastly outweigh the risks.’
    • The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.
    • You can try to fight this, but you can’t stop it. Your best bet is to get on board.
  • Fallacies (85%)
    The author makes an appeal to authority when he states 'Altman, of course, has testified before Congress' and 'Altman made it clear that we’re no longer in that world.' This implies that because Altman has testified before Congress and holds a certain vision for the future of AI, his opinions are valid and should be trusted. Additionally, the author uses inflammatory rhetoric when describing OpenAI's actions as 'blowing past ethical concerns' and 'operating with impunity.' This language is intended to evoke strong emotions from readers rather than presenting objective facts.
    • Altman made it clear that we’re no longer in that world.
    • This is a messy situation for OpenAI, complicated by Altman’s own social-media posts. On the day that OpenAI released ChatGPT’s assistant, Altman posted a cheeky, one-word statement on X: ‘Her’–a reference to the 2013 film of the same name, in which Johansson is the voice of an AI assistant that a man falls in love with.
    • The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.
  • Bias (80%)
    The author expresses a clear bias towards OpenAI and their goals, implying that the ends justify the means. He also uses language that depicts those opposed to OpenAI's actions as being unable to stop them or being 'rough'.
    • Hubris and entitlement are inherent in the development of any transformative technology. A small group of people needs to feel confident enough in its vision to bring it into the world and ask the rest of us to adapt.
    • It follows that the company would plow ahead, consent be damned, simply because it might believe the stakes are too high to pivot or wait.
    • You can try to fight this, but you can’t stop it. Your best bet is to get on board.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

95%

  • Unique Points
    • OpenAI found a sound-alike for Scarlett Johansson’s voice and used it in ChatGPT without her consent.
    • OpenAI unveiled a chattier version of ChatGPT featuring the Johansson sound-alike last week.
  • Accuracy
    • OpenAI found a sound-alike for Scarlett Johansson’s voice and used it in ChatGPT.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

82%

  • Unique Points
    • Scarlett Johansson expressed shock, anger, and disbelief over a new artificial voice named ‘Sky’ from OpenAI that closely resembled her own voice.
    • OpenAI denied creating a copy of Johansson’s voice but paused its use.
    • A bipartisan bill called the NO FAKES Act was drafted last year to allow people to sue creators and distributors of unauthorized AI-generated digital replicas.
    • Sen. Chris Coons (D-Del.) plans to introduce the NO FAKES Act in the Senate next month.
    • The Federal Trade Commission finalized a rule banning impersonation of government and business and proposed new protections against AI impersonation of individuals, but declined to comment on Johansson’s allegations.
    • Robert Weissman, president of Public Citizen, is skeptical that Johansson’s experience would change the landscape on tech regulation.
    • The SAG-AFTRA union representing entertainers sponsors a California bill limiting the use of digital likenesses and supports the federal NO FAKES bill.
  • Accuracy
    • OpenAI denied creating a copy of Johansson’s voice but paused its use.
    • OpenAI approached Scarlett Johansson for voice licensing but allegedly used her voice without permission in their digital assistant, Sky.
    • Sam Altman claimed the company had already cast a different voice actor before reaching out to Johansson and would take down Sky’s voice ‘out of respect’ for her.
    • OpenAI contacted Johansson in September and offered her a job to voice the GPT-4o system ‘Sky’. She declined.
    • Two days before the release of GPT-4o, OpenAI contacted Johansson’s agent asking her to reconsider; the system was out before she could respond.
    • Altman stated that the voice of ‘Sky’ was not meant to sound like Johansson and was cast before he contacted her last fall.
  • Deception (30%)
    The article contains selective reporting as it only reports details that support the author's position about the NO FAKES Act and Johansson's case. It does not provide any counterarguments or mention any potential drawbacks of the bill. The author also uses emotional manipulation by describing Johansson's experience as a 'frankly disturbing threat'.
    • Not everyone who has been cloned by AI is an actor – and not everyone has a problem with it. Psychologist Martin Seligman saw benefit when a former student turned his works into a chatbot that sounds like him.
    • The Johansson case underscores the ‘frankly disturbing threat’ of unauthorized AI ripoffs, Sen. Chris Coons (D-Del.) wrote to POLITICO.
  • Fallacies (90%)
    The article does not contain any formal logical fallacies. However, it does present a dichotomous depiction of OpenAI and the AI industry as a whole by framing them as reckless innovators who disregard individual rights. This is achieved through the portrayal of the Johansson case as emblematic of wider concerns about AI's capacity to replicate actual people without their consent. Additionally, there are inflammatory rhetorical elements such as referring to 'the perennially stuck approach to Big Tech' and describing the situation as a 'frankly disturbing threat'.
    • The article frames OpenAI and the AI industry as reckless innovators who disregard individual rights.
    • The Johansson case is portrayed as emblematic of wider concerns about AI's capacity to replicate actual people without their consent.
    • The situation is described as a 'frankly disturbing threat'.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (0%)
    None Found At Time Of Publication

99%

  • Unique Points
    • OpenAI agreed to take down the 'Sky' voice after Johansson's legal counsel sent letters requesting details on how they created it.
    • SAG-AFTRA welcomed OpenAI's decision to pause using 'Sky' and looks forward to working with them on protecting performers' voices and likenesses.
    • The rapid rise of voice imitation technologies has caused anxiety about political disinformation and performers losing their livelihoods.
    • SAG-AFTRA went on strike last summer in part due to concerns over the future of generative AI in the entertainment industry.
  • Accuracy
    • OpenAI contacted Johansson in September and offered her a job to voice the GPT-4o system ‘Sky’. She declined.
    • OpenAI agreed to take down the ‘Sky’ voice after Johansson’s legal counsel sent letters requesting details on how they created it.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (0%)
    None Found At Time Of Publication