
Meta patches flaw in its chatbot to keep your chats private

July 16, 2025

Meta has just patched a serious security flaw in its AI chatbot platform—a bug that could have let users peek at other people’s private prompts and AI responses. It’s a relief for anyone who’s ever worried about the security of their personal data when engaging with these systems.

The issue came to light thanks to Sandeep Hodkasia, founder of AppSecure, who discovered the vulnerability while testing how the platform handles editing of AI prompts. Meta’s system gives each prompt and its response a unique number. Hodkasia realised that by tweaking these numbers, it was possible to access data linked to other users, because Meta’s servers weren’t properly checking who was asking for the information.
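To make the pattern concrete, here is a minimal, hypothetical sketch of the kind of bug described: a lookup that trusts a client-supplied prompt ID without checking ownership, alongside the obvious fix. The data, function names, and error handling are illustrative assumptions only, not Meta's actual code.

```python
# Hypothetical illustration of a broken-authorization (IDOR-style) lookup
# and its fix. Not Meta's code; just the general pattern.

PROMPTS = {
    101: {"owner": "alice", "text": "Plan my trip to Lisbon", "response": "..."},
    102: {"owner": "bob", "text": "Draft a resignation letter", "response": "..."},
}

def get_prompt_vulnerable(requesting_user: str, prompt_id: int) -> dict:
    """Returns whatever record matches the ID -- no ownership check,
    so simply changing the number exposes another user's prompt."""
    return PROMPTS[prompt_id]

def get_prompt_fixed(requesting_user: str, prompt_id: int) -> dict:
    """Looks up the record, then verifies the requester actually owns it."""
    record = PROMPTS[prompt_id]
    if record["owner"] != requesting_user:
        raise PermissionError("prompt does not belong to this user")
    return record

if __name__ == "__main__":
    # Alice requesting Bob's prompt: the vulnerable path leaks it,
    # the fixed path refuses.
    print(get_prompt_vulnerable("alice", 102))  # leaks Bob's data
    try:
        get_prompt_fixed("alice", 102)
    except PermissionError as err:
        print("blocked:", err)
```

The fix is simply to tie every lookup to the authenticated user rather than to whatever ID arrives in the request.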

After Hodkasia reported the flaw on December 26, 2024, Meta moved quickly, deploying a fix by January 24, 2025 and awarding him a $10,000 bug bounty. The company also said it found no evidence the vulnerability had been abused. It's comforting to see prompt action when potential risks emerge.

This fix comes at a time when tech companies are racing to improve their AI technologies while balancing security and privacy challenges. If you’ve ever wrestled with concerns about your online privacy, knowing that companies like Meta are actively working to seal these vulnerabilities can be a real confidence booster.
