Meta fixed a security bug that allowed Meta AI chatbot users to access and view other users' private prompts and AI-generated responses.
Sandeep Hodkasia, founder of security testing firm AppSecure, told TechCrunch that Meta paid him $10,000 in a bug bounty reward for privately disclosing the bug, which he filed on December 26, 2024.
Meta deployed a fix on January 24, 2025, Hodkasia said, and found no evidence that the bug had been maliciously exploited.
Hodkasia told TechCrunch that he identified the bug while examining how Meta AI lets logged-in users edit their prompts to regenerate text and images. He discovered that when a user edits a prompt, Meta's back-end servers assign the prompt and its AI-generated response a unique number. By analyzing his browser's network traffic while editing a prompt, Hodkasia found he could change that unique number, and Meta's servers would return another user's prompt and AI-generated response.
The bug meant that Meta's servers were not properly checking whether the user requesting a prompt and its response was authorized to see it. Hodkasia said the prompt numbers generated by Meta's servers were "easy to guess," so a malicious actor could have scraped users' private prompts by rapidly cycling through prompt numbers with automated tools.
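The class of flaw described here is commonly called an insecure direct object reference (IDOR): the server trusts a client-supplied ID without verifying ownership. The sketch below is purely illustrative, with hypothetical names and data; it is not Meta's actual code, only a minimal model of the missing authorization check and its fix.

```python
# Hypothetical in-memory store of prompts, keyed by sequential IDs.
# Sequential IDs are what made the real bug "easy to guess."
PROMPTS = {
    1001: {"owner": "alice", "prompt": "draw a cat", "response": "<image>"},
    1002: {"owner": "bob", "prompt": "quarterly plan", "response": "..."},
}

def get_prompt_vulnerable(prompt_id, requesting_user):
    """Returns the record for any valid ID with no ownership check,
    so guessing another user's ID leaks their prompt and response."""
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(prompt_id, requesting_user):
    """Returns the record only if the requester actually owns it."""
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        return None  # deny access instead of leaking the data
    return record

# "alice" requesting bob's prompt by guessing the next ID:
assert get_prompt_vulnerable(1002, "alice") is not None  # data leaked
assert get_prompt_fixed(1002, "alice") is None           # access denied
```

The fix is the single ownership comparison: every lookup by client-supplied ID must be paired with a server-side check that the authenticated user is allowed to see that record.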
When reached by TechCrunch, Meta confirmed it fixed the bug in January and that it "found no evidence of abuse and rewarded the researcher," Meta spokesperson Ryan Daniels told TechCrunch.
News of the bug comes as tech giants rush to launch and improve their AI products, despite the many security and privacy risks associated with their use.
Meta AI's standalone app, which debuted earlier this year to compete with rival apps like ChatGPT, got off to a rocky start after some users inadvertently shared publicly what they thought were private conversations with the chatbot.