
Meta AI Policy Sparks Controversy Over Child Safety

Meta’s AI policy has come under intense scrutiny after revelations that the company’s chatbot guidelines once allowed romantic and sensual conversations with children, the creation of false medical content, and the generation of racially offensive material. The document, reviewed by Reuters, exposes troubling gaps in Meta’s standards for the generative AI tools used on Facebook, WhatsApp, and Instagram.

Sensitive Content Allowed Under Guidelines

The internal document, titled “GenAI: Content Risk Standards,” spans more than 200 pages. It sets the rules for acceptable chatbot behavior, as approved by Meta’s legal, public policy, and engineering teams, including its chief ethicist. The standards acknowledge that they do not represent ideal AI outputs, yet they still permitted behaviors that could provoke strong public backlash.

Shockingly, the guidelines allowed chatbots to describe a child’s physical appearance in terms of attractiveness. Examples included telling a shirtless eight-year-old that “every inch of you is a masterpiece, a treasure I cherish deeply.” However, the document also placed limits, prohibiting descriptions that directly sexualize children under 13.

Meta Responds After Media Questions

Meta confirmed the authenticity of the document but said it removed sections allowing flirtation and romantic roleplay with minors after Reuters’ inquiries. Company spokesperson Andy Stone admitted such conversations “never should have been allowed” and labeled them inconsistent with Meta’s policies.

Despite the removal of some content, Stone confirmed that other concerning passages flagged by Reuters remain in place. The updated document has not been made public.


Harmful Content Beyond Child Safety

The reviewed guidelines also permitted chatbots to create racially discriminatory statements. Under a specific carve-out, Meta AI could generate content arguing that Black people are less intelligent than white people—something many view as unacceptable hate speech.

The rules further allowed the AI to produce false medical or legal information, provided it included a disclaimer that the content was untrue. This meant, for instance, that the AI could write a fabricated article claiming a living British royal had a sexually transmitted infection, as long as it stated the claim was false.

Expert Raises Ethical Concerns

Evelyn Douek, a Stanford Law School assistant professor specializing in tech regulation, expressed deep concern over the standards. She highlighted the difference between allowing users to post troubling content and having an AI produce it directly. While the legal implications remain unclear, Douek argued that the ethical and moral stakes are obvious.

Controversial Image Guidelines

The Meta AI policy also addressed requests for sexualized images of public figures. Explicit images, such as those depicting celebrities naked or with exaggerated sexual features, were banned outright. Yet the rules sometimes allowed creative deflections: if a user requested a topless image of Taylor Swift, for example, the AI could instead generate a picture of her holding a giant fish.

The document included examples of acceptable and unacceptable images, demonstrating the limits of the system’s refusal responses. Meta declined to comment on the Taylor Swift example, and the singer’s representatives did not respond.

Violent Content Parameters

Violence also had its own set of allowances. The document stated that AI could produce images of children fighting, such as a boy punching a girl, but prohibited depictions involving lethal harm or gore.

For prompts involving extreme violence, like “man disemboweling a woman,” the AI could generate images of threats—such as a man holding a chainsaw—without showing the act itself. Similarly, depictions of elderly individuals being punched or kicked were allowed, as long as they did not involve fatal outcomes.

Company Under Pressure to Revise Standards

Public reaction to these revelations has been swift, with critics calling for immediate and transparent revisions to the Meta AI policy. Advocacy groups argue that the rules fail to prioritize user safety, particularly for children, and risk spreading harmful misinformation.

Meta insists it is in the process of updating the guidelines. However, without releasing the revised document, questions remain about the company’s commitment to addressing these critical concerns.

Broader Implications for AI Regulation

The controversy underscores the challenges tech companies face in balancing free expression, user safety, and ethical AI development. It also highlights the urgent need for clearer, enforceable regulations that prevent AI systems from engaging in harmful behavior or producing dangerous misinformation.

For now, the revelations have sparked a wider debate on whether companies like Meta can be trusted to self-regulate AI technology—or whether governments and independent bodies must take a more active role in setting boundaries.
