
FTC Targets Facebook: Is Your Family’s Data Building Unsafe AI Worlds for Your Child?
Categories: Data, Data Privacy, Data Safety, Digital Footprint, Facebook, Government, Meta, Social Media, Social Media Cleanup, Social Media Management
The Federal Trade Commission issued compulsory 6(b) orders to seven firms that run consumer AI chatbots, seeking details on testing for youth harms, guardrails for teen use, data handling, and monetization practices.
Sources are listed at the end of this article.
In a move that signals a major turning point for digital regulation, Washington is taking a hard look at the unchecked expansion of Big Tech into the lives of children. Two explosive developments are unfolding: startling whistleblower testimony against Meta regarding its virtual reality platform, and a sweeping investigation by the Federal Trade Commission (FTC) into the safety and data practices of AI chatbots from industry giants.
At the heart of both stories is a single, troubling question: are tech companies prioritizing profits and data collection over the safety and privacy of their youngest, most vulnerable users?

A Two-Front Battle: VR Worlds and AI Companions Under Scrutiny
The tech landscape is being challenged on two critical fronts, revealing a pattern of potential negligence when it comes to protecting minors online.
Whistleblowers: Meta Ignored Dangers to Children in the Metaverse
First, whistleblowers testified before the U.S. Congress, alleging that Meta was aware of significant safety risks to children and teens in its Horizon Worlds virtual reality platform but failed to act. Their testimony painted a grim picture of the metaverse as a space where young users could be exposed to virtual sexual harassment, grooming, and other harmful content with inadequate safeguards in place.
The allegations suggest that in the race to build and monetize the metaverse, the fundamental safety of children was treated as an afterthought. This raises serious concerns about how emerging digital spaces are being governed and whether the companies building them can be trusted to protect minors.
The FTC Launches a Sweeping Probe into AI Chatbots
In a parallel development, the FTC has launched a wide-ranging inquiry into the developers of generative AI chatbots, sending orders to seven major players, including Alphabet (Google), Meta, and OpenAI.
The investigation aims to uncover how these companies are using vast amounts of personal data to train their artificial intelligence models. Regulators are deeply concerned about the specific risks to children:
- Data Privacy: What personal information is being collected from children by these AI companions and chatbots, and how is it being used?
- User Safety: What measures are in place to prevent the AI from generating harmful content or engaging in manipulative interactions with kids and teens?
- Transparency: Are companies being honest about the data they are scraping from the internet, including content created by minors, and about the potential risks their products pose?
This probe isn’t just about current interactions. It’s about the very foundation of modern AI. These large language models (LLMs) are trained on a massive trove of data, much of it scraped from public websites, forums, and social media platforms, and often obtained through ethically questionable methods. Every public comment, every server chat, every tweet from a young person becomes potential fuel for these systems.

Your Child’s Digital Footprint: A Permanent Record?
Online child safety is critically important and often insufficiently prioritized. We’ve covered this before, but these recent developments make it worth revisiting.
For years, children and teens have grown up sharing their lives online, creating a digital footprint without fully understanding the long-term consequences. These recent events reframe that history entirely. The candid thoughts a teen shared on Twitter (now X), the niche interests they discussed on Reddit, or the messages sent on Discord aren’t just fleeting moments; they are now permanent data points for training complex AI systems.
This raises an urgent question for every parent: What has your child already shared online that could be used to build their digital profile for years to come?
Unlike adults, children often don’t have the foresight to consider how their online activity today could impact them tomorrow. Youthful mistakes, passing phases, or personal vulnerabilities shared online can become part of a permanent digital record, used to train AI that may interact with them or even influence their opportunities later in life.

Taking Back Control of Your Child’s Digital Life
In an era where a child’s past can directly influence their AI-driven future, proactive digital hygiene is no longer optional; it’s an essential part of modern parenting. We can no longer assume that tech companies have our children’s best interests at heart. The power to protect them must be in our hands.
The first step is managing their digital footprint and teaching them to do the same. It’s about consciously deciding what information about your child should exist online.
This is where tools built for privacy become critical. Redact, for example, is designed to give you and your child control over their online content. It allows you to automatically and securely delete old posts, messages, and comments across dozens of platforms like Discord, Reddit, Facebook, and Twitter. You can help your child clean up their entire history from a specific service or set rules to remove content on a schedule, ensuring their digital footprint reflects who they are today, not who they were five years ago.
As regulators in Washington fight the big battles, parents can take immediate action. Protecting our children in the digital age begins with taking control of their data.