Scarlett Johansson Accuses OpenAI of Using Her Voice for ChatGPT's 'Sky' Without Permission: Legal Consequences and Ethical Debates

San Francisco, California, United States of America

  • Actress Scarlett Johansson accuses OpenAI of using her voice for ChatGPT's 'Sky' without permission.
  • The dispute raises concerns about intellectual property rights, particularly likeness and right-of-publicity laws.
  • Johansson and her agents were shocked when they discovered the new AI assistant sounded similar to her voice.
  • OpenAI could face legal consequences.
  • OpenAI issued a casting call last year for a secret project to give ChatGPT a human voice.

Actress Scarlett Johansson has accused OpenAI of using her voice without permission for 'Sky,' a voice option in its ChatGPT AI assistant. The incident has sparked controversy and could carry legal consequences for OpenAI. According to documents OpenAI shared in response to questions from various news outlets, the company issued a casting call last year for a secret project to give ChatGPT a human voice. The company maintains, however, that it did not request or intend to clone Johansson's voice for the project.

Despite this denial, Johansson and her agents were shocked and angered when they discovered that the new AI assistant sounded strikingly similar to her voice. OpenAI had released the updated model before she could decline a second request to voice its AI assistant.

The incident has raised concerns about intellectual property rights, particularly likeness and right-of-publicity laws. Johansson's lawyers have contacted OpenAI regarding the issue, and some experts suggest that the company could face legal consequences for creating a voice that sounds like Johansson without her consent.

This controversy is not an isolated incident in the tech industry. It echoes Silicon Valley's 'bad old days,' when ruthless ambition and arrogance fueled profit-driven innovation without regard for consequences or ethical considerations. The lack of official, independent oversight at these companies has contributed to that culture.

As the debate around AI ethics and regulation continues, it is crucial that clear boundaries be established and enforced, including legally binding, enforceable rules on the use of intellectual property, particularly the likenesses and voices of public figures. Companies like OpenAI must be held accountable for their actions and must respect individuals' rights in their pursuit of technological advancement.



Confidence

86%

Doubts
  • Did OpenAI intentionally clone Johansson's voice for ChatGPT?
  • Is the casting call documentation authentic?
  • What specific laws apply to this situation and their potential consequences?

Sources

88%

  • Unique Points
    • OpenAI was originally created as a non-profit organization that would invest any extra profits back into the business but later formed a profit-oriented arm with a cap on returns for investors.
    • Elon Musk, original co-founder of OpenAI, decided to walk away due to disagreement over the shift from non-profit to profit-driven.
    • Currently, there’s no official, independent oversight of what any of the teams at OpenAI are actually doing.
  • Deception (70%)
    The article contains editorializing and pontification by the author. The author expresses her opinion that OpenAI's actions were a 'classic illustration of exactly what the creative industries are so worried about - being mimicked and eventually replaced by artificial intelligence.' She also states that OpenAI is 'seeking forgiveness rather than permission as an unofficial business plan.' These statements are not facts but the author's interpretation and opinion. The article also contains selective reporting, as it focuses on the negative aspects of OpenAI's actions without mentioning any potential benefits or context. For example, it mentions that OpenAI denied intentional imitation but does not mention that they have since apologized to Scarlett Johansson and offered her a role in their advisory board. The article also contains emotional manipulation through the use of phrases like 'the real problems' and 'nightmares' when discussing the potential risks of AI.
    • OpenAI doesn't break from that mould. Seeking forgiveness rather than permission as an unofficial business plan.
    • These are the real problems.
    • The actor Scarlett Johansson clashed with OpenAI...Ms Johansson claimed both she and her agent had declined for her to be the voice of its new product for ChatGPT - and then when it was unveiled it sounded just like her anyway.
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (95%)
    The author expresses a clear bias against the tech industry and specifically OpenAI, implying that they are profit-driven and irresponsible. She uses language such as 'ruthless ambition' and 'breathtaking arrogance' to depict the industry in a negative light. The author also quotes others who share her perspective, further reinforcing the bias.
    • But the tech firms of 2024 are extremely keen to distance themselves from that reputation.
    • Move fast and break things is a motto that continues to haunt the tech sector, some 20 years after it was coined by a young Mark Zuckerberg.
    • Those five words came to symbolise Silicon Valley at its worst - a combination of ruthless ambition and a rather breathtaking arrogance - profit-driven innovation without fear of consequence.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

100%

  • Unique Points
    • OpenAI issued a casting call last year for a secret project to give ChatGPT a human voice.
    • OpenAI did not request a clone of actress Scarlett Johansson’s voice for the project.
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

65%

  • Unique Points
    • Scarlett Johansson alleged that OpenAI used her voice without permission for a new AI assistant tool.
    • OpenAI could face legal consequences for making a ChatGPT voice that sounds like Scarlett Johansson, whether intentionally or not.
  • Accuracy
    • Sam Altman, CEO of OpenAI, had gained favor in Washington due to his company’s potential to change the world and his personal modesty.
    • The incident threatens to undo the goodwill OpenAI had built since the advent of ChatGPT.
  • Deception (30%)
    The article contains editorializing and sensationalism. The author uses phrases like 'convincing turn as a Silicon Valley visionary', 'kind of a bad guy', 'welcome, cooperative emissary from Silicon Valley', and 'speedrunning the arc of charismatic tech figureheads before him'. These phrases are not factual statements but rather the author's opinions and interpretations. The author also uses sensational language like 'massive backlash that threatens to undo the goodwill they’ve painstakingly worked to build since the advent of ChatGPT' and 'burning through the goodwill and awe his products inspire'. These phrases are intended to grab the reader's attention and create a sense of urgency, but they do not provide any new information or insights.
    • The author describes Altman as 'kind of a bad guy'
    • The author says OpenAI faces a 'massive backlash that threatens to undo the goodwill they’ve painstakingly worked to build since the advent of ChatGPT'
    • The author credits Sam Altman with a 'convincing turn as a Silicon Valley visionary'
  • Fallacies (80%)
    The article contains an appeal to authority fallacy when it states 'There's already a big conversation happening about generative AI, creativity and artists.' This statement implies that because there is a conversation happening about the topic, it must be significant or true. However, this does not provide any evidence or logical reasoning for the claim.
    • There's already a big conversation happening about generative AI, creativity and artists.
  • Bias (80%)
    The article implies a negative bias towards Sam Altman and OpenAI by using language that depicts them as 'bad guys' and 'speedrunning the arc of charismatic tech figureheads before them'. The author also quotes individuals who express criticism towards Altman and OpenAI, which further reinforces this bias. Additionally, the article mentions a massive backlash against OpenAI due to their use of Johansson's voice without permission, which could be seen as an example of monetary bias.
    • At Slate, Nitish Pahwa found it in keeping with 'the underlying record of how he appears to treat his own employees, run his company, and keep his secrets.'
    • Suddenly, he’s kind of a bad guy.
    • The move represents Altman’s vision of AI as 'the ultimate engine of entitlement.'
    • There’s already a big conversation happening about generative AI, creativity and artists.
    • This is not the kind of tech figure Washington thought it was inviting in last year.
    • Will it? Polling released yesterday by the pro-regulation AI Policy Institute found people more worried about AI’s harms than excited about its potential – at least when you remind them what those harms might actually be.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (0%)
    None Found At Time Of Publication

88%

  • Unique Points
    • OpenAI could face legal consequences for making a ChatGPT voice that sounds like Scarlett Johansson, whether intentionally or not.
    • Scarlett Johansson’s lawyers have contacted OpenAI regarding the similarities between the voice of ChatGPT’s assistant ‘Sky’ and Johansson.
    • OpenAI previously approached Johansson about voicing their AI assistant but she refused.
  • Accuracy
    • OpenAI could face legal consequences for making a ChatGPT voice that sounds like Scarlett Johansson.
    • Scarlett Johansson's lawyers have contacted OpenAI regarding the similarities between the voice of ChatGPT’s assistant ‘Sky’ and Johansson.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (85%)
    The author makes an appeal to authority by quoting intellectual property lawyers and referencing past celebrity likeness lawsuits. The author also uses inflammatory rhetoric by stating that OpenAI could be in 'real trouble' with Scarlett Johansson.
    • “There are a few courses of actions she can take, but case law supports her position,”
    • “If you misappropriate someone’s name, likeness, or voice, you could be violating their right to publicity,”
    • “The Ninth Circuit held that a celebrity with a distinctive voice could recover against someone who used a voice impersonator to create the impression that the celebrity had endorsed the product or was speaking in the advertisement.”
  • Bias (80%)
    The author mentions past celebrity likeness lawsuits and specifically mentions Bette Midler and Tom Waits, implying that they won their cases due to the use of similar-sounding voices in commercials. This could be seen as an attempt to sway the reader's opinion towards believing that Johansson has a strong case against OpenAI for creating a ChatGPT voice that sounds like her.
    • Past celebrity likeness lawsuits have clear implications for AI voice clones.
    • The wins by Midler and Waits have clear implications for AI voice clones.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

67%

  • Unique Points
    • Scarlett Johansson rejected OpenAI’s CEO Sam Altman’s request to voice their new ChatGPT model ‘Sky’ twice.
    • OpenAI released the new model with a voice similar to Johansson before she could decline the second time.
    • Johansson was shocked, angry and in disbelief when she found out about the use of her voice without consent.
  • Accuracy
    • OpenAI claimed that the voice belonged to a different professional actress using her own natural speaking voice and was not intended to resemble Johansson’s.
    • Scarlett Johansson alleged that OpenAI used her voice without permission for a new AI assistant tool.
  • Deception (10%)
    The article contains editorializing and pontification from the author. The author expresses her opinion that OpenAI's actions were 'cringe-worthy' and 'mind-blowing'. She also makes assumptions about the motivations of OpenAI's CEO, Sam Altman. The author also uses emotional manipulation by describing Johansson as being 'shocked, angered and in disbelief'. The article also contains selective reporting as it only reports details that support the author's position. For example, it mentions that two employees left OpenAI due to safety concerns but does not mention any counterarguments or explanations from the company.
    • The AI’s debut was both mind-blowing and cringe-y.
    • She was ‘shocked, angered and in disbelief’ that Altman would use a voice ‘so eerily similar’ to her own.
    • Of course, by the end of the movie, the AI’s have decamped to some other plane of existence, leaving the humans, with all their messy human needs, to take care of one another once again.
  • Fallacies (85%)
    The author makes an appeal to authority by mentioning Sam Altman's apology and his reputation as the face of 'responsible AI'. The author also uses inflammatory rhetoric by describing OpenAI's actions as 'cringe-worthy' and 'mind-blowing'. There is a dichotomous depiction of OpenAI, where they are described as both impressive for their technological advancements and criticized for their disregard for intellectual property.
    • The kind of diplomatic candor that’s helped make Altman, a 39-year-old billionaire, the face of ‘responsible AI.’
    • But before she had a chance to say no a second time, OpenAI released the new model with a voice that sounded strikingly similar to Johansson’s.
    • The rift reflects a broader anxiety among artists, academics and even some AI pioneers over the speed at which tech companies are building and releasing AI tools to the public, with seemingly little regard for intellectual property and safety concerns.
    • Of course, by the end of the movie, the AI’s have decamped to some other plane of existence, leaving the humans, with all their messy human needs, to take care of one another once again.
  • Bias (90%)
    The author expresses a clear bias against the tech industry and specifically OpenAI, implying that they are careless with intellectual property and safety concerns. The author also uses language that depicts the tech industry as being made up of young, white, and male individuals whose biases are getting baked into AI. This is an example of ideological bias.
    • It also points to one of the original sins of AI that its developers have yet to resolve: that all of these products are being greenlit, funded and built by the 0.01% of Silicon Valley – a largely young, white and male cohort, whose natural human biases are getting baked in to the AI in ways that even they don’t fully understand, or are aware of.
    • The rift reflects a broader anxiety among artists, academics and even some AI pioneers over the speed at which tech companies are building and releasing AI tools to the public, with seemingly little regard for intellectual property and safety concerns.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication