A serious security flaw in Meta AI, the artificial intelligence chatbot developed by Meta, exposed private prompts and generated responses to unauthorized users. Although the issue has now been fixed, the incident highlights growing concerns over privacy risks in generative AI platforms. Meta confirmed the vulnerability on Wednesday and said there is no evidence the flaw was exploited for malicious purposes. However, experts warn such lapses could severely impact user trust as companies push ahead in the AI race.
Bug Allowed Access to Private Prompts
The vulnerability was first discovered by Indian cybersecurity researcher Sandeep Hodkasia, founder of AppSecure. On December 26, 2024, while analyzing browser traffic related to Meta AI’s prompt editing feature, Hodkasia noticed that each AI-generated response and prompt was tagged with a unique numerical identifier. By altering these identifiers, he could retrieve private data from other users — a glaring oversight that pointed to a lack of access control on Meta’s servers.
“The prompt numbers were easily guessable,” Hodkasia told TechCrunch. He warned that a malicious actor could automate the process to scrape sensitive user content at scale. Hodkasia reported the issue through Meta’s bug bounty program and received a $10,000 reward for his discovery.
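The class of flaw described above, a record fetched by a guessable numeric identifier with no ownership check on the server, is shown here beside its corrected form. This is an added example, not Meta's code; the in-memory store, user names and function names are hypothetical.

```python
# Added illustration: hypothetical in-memory model, not Meta's implementation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Prompt:
    prompt_id: int
    owner_id: str
    text: str


# Constructed sample data with sequential ids, as in the reported flaw.
STORE = {
    1001: Prompt(1001, "alice", "draft of my business plan"),
    1002: Prompt(1002, "bob", "question about a medical result"),
}


class NotFound(Exception):
    """Raised for both missing and non-owned records."""


def get_prompt_vulnerable(session_user: str, prompt_id: int) -> Prompt:
    # The caller is logged in, but the record's owner is never compared
    # with the session user, so any valid id is returned to anyone.
    try:
        return STORE[prompt_id]
    except KeyError:
        raise NotFound(prompt_id) from None


def get_prompt_checked(session_user: str, prompt_id: int) -> Prompt:
    prompt = STORE.get(prompt_id)
    # Object-level authorization on every read. "Missing" and "not yours"
    # produce the same error so responses do not confirm which ids exist.
    if prompt is None or prompt.owner_id != session_user:
        raise NotFound(prompt_id)
    return prompt


def can_read(handler, session_user: str, prompt_id: int) -> bool:
    """Return True if the handler hands the record to this session user."""
    try:
        handler(session_user, prompt_id)
        return True
    except NotFound:
        return False
```

Random, non-sequential identifiers make guessing harder, but the ownership check is what closes the hole.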
Fix Rolled Out, But Concerns Remain
Meta said it implemented a fix on January 24, 2025, and confirmed no data breach or misuse had been reported. “We addressed the issue quickly and found no signs of abuse,” said Meta spokesperson Ryan Daniels. While the fix may have closed the specific loophole, the episode has renewed questions about the integrity of Meta’s security protocols.
This isn’t the first time Meta AI has been at the center of privacy concerns. The company’s chatbot, launched earlier this year to rival OpenAI’s ChatGPT, previously drew criticism when users accidentally made private chats public. The latest incident now raises doubts about how well tech giants are protecting personal data in the fast-moving AI space.
A Broader Industry Problem
As Meta, Google, OpenAI, and others compete to lead the generative AI market, security experts have repeatedly cautioned that rapid development often comes at the expense of strong safety mechanisms. “The industry is moving faster than the safeguards,” said a cybersecurity analyst. “We’re seeing powerful tools deployed at scale, but the systems managing them still have basic vulnerabilities.”
AI platforms increasingly deal with personal, confidential, and sensitive information — from business ideas and legal queries to health-related discussions. That makes breaches like this especially troubling, even if no malicious access is reported. It only takes one incident to erode trust permanently.
Implications for User Trust and Regulation
This Meta AI bug adds pressure on tech companies and regulators to enforce stricter security and transparency standards. As governments globally weigh rules for AI safety, this incident may become a reference point for the types of risks that urgently need addressing.
Users, meanwhile, are encouraged to be more cautious when sharing personal data with AI chatbots, especially on platforms that have not clearly communicated their data usage and retention policies. The Meta AI bug may not have caused harm this time, but it has underlined a simple truth — even the biggest companies are not immune to critical errors.
With competition intensifying, only time will tell whether innovation and security can grow hand in hand. Until then, both users and regulators will continue watching closely.