Millions of private chats exposed in ‘AI Girlfriend’ app cyberleak


A critical security flaw has exposed the deeply personal conversations and private data of over 400,000 users from two popular AI companion apps, “Chattee Chat” and “GiMe Chat.”

Cybernews researchers discovered the massive leak, which included millions of intimate user messages and hundreds of thousands of images, highlighting what they termed “significant security negligence” by the Hong Kong-based developer, Imagime Interactive Limited.

The data was housed on a publicly exposed Kafka broker instance, a streaming server used for real-time data flows, which was left completely unsecured, with no access controls or authentication enforced. Researchers noted that anyone with the link could connect to the app’s content delivery network and view all content sent and received by users.
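Kafka brokers only enforce security when their listeners are explicitly configured for it; a plaintext listener with no authorizer will accept connections from any client that can reach the port. As a rough illustration (the mechanism choices and file paths below are assumptions for the sketch, not details from the report), the difference in a broker's `server.properties` can be as small as:

```properties
# Open listener: any client that can reach port 9092 can read every topic.
listeners=PLAINTEXT://0.0.0.0:9092

# Hardened alternative: TLS encryption plus SASL/SCRAM authentication,
# with ACLs enforced and no fallback to "allow everyone".
listeners=SASL_SSL://0.0.0.0:9093
ssl.keystore.location=/etc/kafka/server.keystore.jks   # assumed path
sasl.enabled.mechanisms=SCRAM-SHA-512
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```

In the open configuration, a generic Kafka consumer needs nothing more than the broker's address to subscribe to topics, which is consistent with the researchers' finding that anyone with the link could view user content.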

The massive trove of compromised data includes over 43 million messages exchanged between users and their AI companions, much of it highly sensitive, with researchers finding “virtually no content that could be considered safe for work.” Additionally, over 600,000 images and videos shared or generated by the AI models were exposed.

The leak affects users across both Android and iOS, with the majority of users based in the US. Beyond the intimate conversations and images, the breach exposed users’ IP addresses and unique device identifiers.

While no direct personally identifiable information such as names and email addresses was leaked, these identifiers can be cross-referenced with data from previous breaches to potentially reveal real-world identities, leaving users vulnerable to targeted harassment and sextortion campaigns.

The exposed data also included in-app purchase logs, revealing that some highly engaged users spent as much as $18,000 on in-app currency, with the developer’s estimated total revenues exceeding $1 million. Cybernews warned that this level of engagement, together with the uploaded images, may be used by threat actors to identify, discredit, and harass users of the apps.

The vulnerability was responsibly disclosed to the developer, and the Kafka broker instance has since been secured. However, the researchers caution that, because the exposed server had already been indexed by major search engines, it would have been easy for malicious actors to access the data.

They conclude with a warning that users should be acutely aware that conversations with AI companions may not be as private as claimed, as companies hosting such apps may not properly secure their systems.

Cybernews

For latest tech stories go to TechDigest.tv

