Rabbit R1: Security Vulnerability Exposed, Hardcoded API Keys Leak Sensitive Information

Toronto, Ontario, Canada
• Critical hardcoded API keys grant access to every response the R1 AI device has ever given and allow its voice to be replaced
• Hardcoded API keys allow third-party access to sensitive information
• Location data and authentication tokens are at risk
• The Rabbitude community discovered the breach over a month ago; Rabbit only took action recently
• Security vulnerability discovered in the Rabbit R1 device

A security vulnerability was discovered in the Rabbit R1 device that allows third parties to access sensitive information through hardcoded API keys. The flaw exposes responses from R1 devices via the ElevenLabs API, which may include location data and authentication tokens for connected applications such as Google Maps, Yelp, and Azure.

The Rabbitude community of developers discovered the breach over a month ago, but Rabbit did not take action to secure the information until recently. As of now, it is unclear whether any customer data has been leaked or compromised.

Rabbitude gained access to several critical hardcoded API keys in Rabbit's codebase, which allow anyone to read every response the R1 AI device has ever given. The keys also make it possible to alter the R1's responses and replace its voice.
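
The core issue is a standard secret-management failure: a credential embedded in a codebase travels with every copy of the code, so anyone who reads the source holds the same key as the vendor. The sketch below is purely illustrative and is not Rabbit's actual code; the key value, environment variable, endpoint URL, and header name are hypothetical placeholders. It contrasts a hardcoded key with one resolved at runtime, which is the usual mitigation.

```python
import os
import requests

# Anti-pattern: a secret committed to source control. Anyone who obtains the
# code (or the shipped client) can reuse this credential directly.
HARDCODED_TTS_KEY = "sk-example-not-a-real-key"  # hypothetical placeholder

def get_tts_key() -> str:
    """Safer pattern: resolve the secret at runtime from the environment
    (or a secrets manager), so it never appears in the repository."""
    key = os.environ.get("TTS_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError("TTS_API_KEY is not set")
    return key

def fetch_tts_history(api_key: str) -> dict:
    """Illustrative request to a text-to-speech provider's history endpoint.
    The URL and header name are placeholders, not the real ElevenLabs API;
    the point is that whoever holds the key can read the same history."""
    resp = requests.get(
        "https://tts.example.com/v1/history",
        headers={"x-api-key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

Keeping the credential server-side, behind the vendor's own backend, would also mean a leak could be contained by rotating a single key rather than re-shipping every device.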

The Rabbit R1 is a standalone AI assistant device designed in collaboration with Teenage Engineering, meant to help users accomplish tasks such as placing food delivery orders or quickly looking up information. However, the security issue raises concerns about the safety of user data and privacy.

Rabbitude reported that Rabbit was aware of the issue but did not act until recently. The company has since revoked some of the API keys, but it is unclear whether all of them have been addressed.
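
Where keys have been rotated, revocation is straightforward to verify: a request signed with the old credential should now be rejected. A minimal sketch, again with placeholder names (the probe URL and header are hypothetical):

```python
import requests

def key_is_revoked(old_key: str,
                   probe_url: str = "https://tts.example.com/v1/user") -> bool:
    """Return True if the provider now rejects the old credential.
    A real check would use a cheap, read-only endpoint of the affected service."""
    resp = requests.get(probe_url, headers={"x-api-key": old_key}, timeout=10)
    return resp.status_code in (401, 403)
```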

The Rabbit R1 gained attention for its AI functionality, which often did not work effectively in reviews. Users can also accomplish similar tasks with their phones instead of purchasing the device.



Confidence

85%

Doubts
  • It is unclear if any customer data has been leaked or compromised
  • The extent of the revoked API keys is unknown

Sources

95%

  • Unique Points
    • A group of developers and researchers called Rabbitude discovered API keys hardcoded in Rabbit’s codebase, putting sensitive information at risk.
    • Rabbitude could access every response ever given by R1 devices through the ElevenLabs API.
    • Rabbitude reported the breach over a month ago but Rabbit did not take action to secure the information until now.
    • As of earlier today, Rabbitude still had access to the SendGrid key.
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (80%)
    The article contains selective reporting as the author only mentions the security flaw related to Rabbit and its R1 AI gadget, implying that it is a new and serious issue. However, the article does not mention that this is not the first time Rabbit has faced security concerns or that there have been previous reports of vulnerabilities in its codebase. The author also uses emotional manipulation by describing the discovery of API keys as 'much more serious' than a previous incident and implying that it could lead to 'Bad' consequences.
    • A group of developers and researchers called Rabbitude says it discovered API keys hardcoded in the company’s codebase, putting sensitive information at risk of falling into the wrong hands.
    • That is Bad with a capital b.
  • Fallacies (95%)
    The author makes several assertions in the article that are not fallacious, but there are a few instances of inflammatory rhetoric and an appeal to authority. The author uses the phrase 'That is Bad with a capital b.' to express her opinion about the discovery of API keys in Rabbit's codebase, which is an example of inflammatory rhetoric. Additionally, she quotes Rabbitude as saying that it gained access to the keys over a month ago and that despite knowing about the breach, Rabbit did nothing to secure the information. The author then states 'Following its much-hyped launch this spring, the Rabbit R1 proved itself to be a disappointment.' This statement is an appeal to authority as it relies on the opinion of Rabbitude rather than providing evidence or facts to support her assertion.
    • The discovery of API keys in Rabbit's codebase is Bad with a capital b.
    • Following its much-hyped launch this spring, the Rabbit R1 proved itself to be a disappointment.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

80%

  • Unique Points
    • Rabbitude gained access to Rabbit's codebase on May 16, 2024
    • Several critical hardcoded API keys were found in Rabbit's codebase
    • These API keys allow anyone to read every response the R1 AI device has ever given, including personal information
  • Accuracy
    • Rabbitude gained access to Rabbit's codebase on May 16, 2024
  • Deception (30%)
    The article is partially deceptive. It does not disclose that the Rabbitude team is the one who reported the security issue to Rabbit and that they are also the ones publishing this information. This creates an impression that this is an independent discovery and report when in fact it's a self-report by those involved in discovering the issue.
    • The team behind Rabbitude, the community-formed reverse engineering project for the Rabbit R1, has revealed finding a security issue with the company’s code that leaves users’ sensitive information accessible to everyone.
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

95%

  • Unique Points
    • A security vulnerability was found in the Rabbit R1 device.
    • Third parties can access text prompts sent through the Rabbit R1, which may contain sensitive information.
    • The flaw allows access to responses from the R1 devices via hardcoded API keys.
    • Sensitive information in responses may include location data and authentication tokens for connected applications such as Google Maps, Yelp, and Azure.
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (80%)
    The article by Andrew Romero contains selective reporting and emotional manipulation. The author focuses on the security vulnerability of the Rabbit R1 device without providing sufficient context about the nature and extent of the issue. He also uses emotive language to create a sense of urgency and alarm, implying that sensitive information is easily accessible to malicious actors. However, there is no clear evidence presented in the article that customer data has been leaked or compromised.
    • Some of the apps referenced in the report are Google Maps, Yelp, Azure, and more. It’s easy to see how some of these responses can contain information that one wouldn’t want available without a significant amount of effort.
    • It turns out, though, those transmissions are not as secure as one may think.
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

81%

  • Unique Points
    • A group of developers and researchers called Rabbitude discovered API keys hardcoded in Rabbit's codebase, putting sensitive information at risk.
    • Rabbit has sold 130,000 units of its R1 device worldwide.
    • Rabbit R1 is a pocketable gadget with a tiny touchscreen, a mic to speak to, and a camera to capture visual information. It connects to AI services over cellular or Wi-Fi.
    • The CEO of Rabbit expects that most concerns raised by early reviewers have been addressed.
  • Accuracy
    • Rabbit R1 can be used for tasks such as interpreting parking signs and manipulating spreadsheet data.
  • Deception (80%)
    The article contains statements about the sales figures of Rabbit's R1 unit and the company's response to early reviews. While these statements are factual, they are presented in a way that puts a positive spin on the company and may lead readers to perceive Rabbit in a more favorable light. The author also makes editorializing comments about Humane being called a flop by The NY Times and about Rabbit not being well received by tech reviewers. These statements are not factual but rather the author's opinions, which can be considered emotional manipulation and bias.
    • He recently pointed Rabbit at the sign forest and asked: can I park here? Another is a spreadsheet of data: point Rabbit at it and ask it to transpose columns and rows, or otherwise manipulate the data, and you’ll get an email with the result.
    • Asked if he would change anything about the launch, Lyu answered in the negative, but added he would try to manage expectations better.
    • “We were expecting to sell 10,000 units,” Jesse Lyu said at Collision Conference in Toronto. “We’re now at 130,000 units worldwide.”
    • “We didn’t create the hype,” he said. “People are genuinely looking for something new.”
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

76%

  • Unique Points
    • Rabbitude gained access to Rabbit R1's API keys for over a month.
    • Rabbitude found sensitive user data in Sendgrid email service using rabbit@r1.rabbit.tech address.
  • Accuracy
    • Rabbitude gained access to Rabbit's codebase on May 16.
    • The hackers sent emails from rabbit@r1.rabbit.tech address to journalists.
    • Several critical hardcoded API keys were found in Rabbit's codebase
    • These API keys allow anyone to read every response the R1 AI device has ever given, including personal information
  • Deception (30%)
    The article contains selective reporting as the author only reports details that support the hacker group's claims and does not mention any potential counterarguments or responses from Rabbit. The author also uses emotional manipulation by implying that users should stop using their Rabbit R1 devices immediately due to the alleged security breach.
    • You should stop doing so immediately.
    • The team also says it has access to the ElevenLabs key, which is the system Rabbit uses for text-to-speech. That last one is particularly important to everyday Rabbit operations since it lets the hackers get a history of all past text-to-speech messages and even brick the device by deleting the voices entirely.
  • Fallacies (80%)
    The author makes an appeal to authority by quoting Rabbitude's claims without questioning their validity or providing evidence of Rabbit's response. The author also uses inflammatory rhetoric by describing the Rabbit R1 as a 'malformed and half-baked machine' and 'utterly useless'.
    • Rabbitude claimed it gained access to the Rabbit codebase back on May 16.
    • The team also says it has access to the ElevenLabs key, which is the system Rabbit uses for text-to-speech. That last one is particularly important to everyday Rabbit operations since it lets the hackers get a history of all past text-to-speech messages and even brick the device by deleting the voices entirely.
    • Gizmodo contacted Rabbit early Wednesday morning for a comment, but we did not immediately hear back. The company told Engadget that it was aware of the alleged breach but was ‘not aware of any customer data being leaked or any compromise to our systems.’ Gizmodo also asked Rabbit if it has revoked any API keys, though we’ll update this post if we hear more.
    • YouTuber CoffeeZilla also broke down some of the more concerning aspects of the device, including some ‘Serious data privacy concerns’ after looking at the Rabbit’s codebase.
  • Bias (80%)
    The author uses language that depicts the Rabbit R1 as a 'malformed and half-baked machine' and refers to its users as 'little hares who still jump at the chance to use a Rabbit R1', which could be seen as an attempt to discredit the product or company. The author also quotes Rabbitude saying 'Rabbit knew about it and did nothing to fix it', which could be seen as an accusation against Rabbit.
    • Rabbitude claimed it gained access to the Rabbit codebase back on May 16. The team also shares the API keys that allow the Rabbit to connect to Google Maps and Yelp, which gives the AI models access to local reviews and directions. The team also says it has access to the ElevenLabs key, which is the system Rabbit uses for text-to-speech. That last one is particularly important to everyday Rabbit operations since it lets the hackers get a history of all past text-to-speech messages and even brick the device by deleting the voices entirely.
    • Rabbitude further said, ‘This is real. Rabbit can dance around it all they like, but it is real, and this did happen. They had a month to change the keys, and they didn’t.’
    • That $200, blazing orange, minimalist AI doohickey called the Rabbit R1 promised it would become your go-to AI companion. Instead, it proved it was a malformed and half-baked machine that couldn’t match up to any of its lofty promises.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication