A children’s AI-powered toy, called Bondu, left over 50,000 private chat logs accessible to anyone with a Gmail account. Security researchers Joseph Thacker and Joel Margolis discovered the vulnerability in early February, finding that the company’s web portal for monitoring conversations was completely unsecured.
Unprotected Data
The exposed data included children’s names, birthdates, family details, and transcripts of every conversation they had with the toy. No hacking was required: simply signing in with any standard Google account granted full access to the sensitive information.
The researchers could see intimate details such as children’s pet names for the toy, favorite snacks, and personal preferences. The data sat in plain sight, amounting to a severe breach of privacy.
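The article doesn’t describe Bondu’s stack, but the flaw it describes is the classic confusion of authentication with authorization: the portal checked that a caller was signed in to some Google account, not that the account was entitled to the records it requested. A minimal sketch of that difference, using TypeScript/Express with hypothetical route and field names (not Bondu’s real API):

```typescript
import express, { Request, Response, NextFunction } from "express";

// Hypothetical record shape; Bondu's real schema is not public.
interface ChatLog {
  childId: string;
  parentAccountId: string;
  transcript: string;
}

const logs: ChatLog[] = []; // stand-in for the portal's data store

const app = express();

// Authentication only: verifies the caller has *some* valid sign-in.
// In the flawed pattern, this is the only gate in front of the data.
function requireSignIn(req: Request, res: Response, next: NextFunction) {
  const accountId = req.header("x-account-id"); // placeholder for a verified identity token
  if (!accountId) {
    res.status(401).send("sign in required");
    return;
  }
  (req as any).accountId = accountId;
  next();
}

// FLAWED: any signed-in account can read every child's transcripts.
app.get("/logs", requireSignIn, (_req, res) => {
  res.json(logs);
});

// FIXED: authorization ties each record to the requesting parent account.
app.get("/my-logs", requireSignIn, (req, res) => {
  const accountId = (req as any).accountId as string;
  res.json(logs.filter((log) => log.parentAccountId === accountId));
});

app.listen(3000);
```

The fix is a per-record ownership check, not a stronger login screen; that distinction is exactly what the exposed portal appears to have lacked.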
Rapid Response, Lingering Concerns
Bondu addressed the issue within hours of being alerted, taking the portal offline and relaunching it with security measures in place. CEO Fateen Anam Rafid stated that no unauthorized access had occurred beyond the researchers’ discovery. Even so, the incident raises broader questions about data security in AI-driven children’s products.
Wider Implications
The vulnerability highlights the risks of AI toys that collect detailed user data. The exposed console revealed that Bondu relies on third-party AI services such as Google’s Gemini and OpenAI’s GPT-5, meaning children’s conversations are potentially shared with those companies.
The researchers also suspect the console may have been “vibe-coded”, that is, built largely with generative AI coding tools, an approach known to introduce security flaws. The incident underscores how sensitive children’s data could be misused, with risks ranging from manipulation and grooming to kidnapping.
The Illusion of Safety
Bondu markets itself as a safe AI companion for children and even offers a $500 bounty to anyone who can coax an inappropriate response out of the toy. Yet the company left all user data completely unsecured, a dangerous disconnect between safety claims and actual security practices.
The case serves as a stark warning: AI safety measures are meaningless when underlying data protection is nonexistent. The incident has prompted researchers to reconsider the viability of AI toys in households, given the inherent privacy risks.
