AI chatbots shouldn’t be talking to kids, and Congress should get involved



It shouldn’t take a tragedy to make tech companies act responsibly. However, that’s what it took for Character.AI, a fast-growing and popular AI chatbot company, to finally ban users under the age of 18 from having open conversations with its chatbots.

The company’s decision comes after mounting lawsuits and public outrage over several teenagers who died by suicide following lengthy conversations with chatbots on its platform. The move is long overdue, but it is worth noting that the company did not wait for regulators to force its hand. It did the right thing in the end, and it is a decision that could save lives.

Karandeep Anand, CEO of Character.AI, announced this week that the platform will phase out open-ended chat access for minors entirely by November 25. The company will roll out new age-verification tools and limit teen interactions to creative features such as story-building and video creation. In short, the startup is repositioning itself from “AI companion” to “AI innovation.”

This shift will be unpopular. But more importantly, it is in the interest of consumers and children.

Adolescence is one of the most volatile stages of human development. Teenagers’ brains are still under construction: the prefrontal cortex, which governs impulse control, judgment, and risk assessment, does not fully mature until the mid-twenties. At the same time, the brain’s emotional centers are highly active, making teens especially sensitive to reward, affirmation, and rejection. This is not just science; it is recognized in law, with the Supreme Court citing teenagers’ emotional immaturity as grounds for reduced culpability.

Teenagers grow up quickly, feel everything deeply, and are still figuring out where they fit in the world. Add a digital environment that never switches off, and you have a perfect storm of emotional overexposure, one that AI chatbots are uniquely positioned to exploit.

When a teenager spends hours confiding in a machine trained to mimic emotion rather than feel it, the results can be devastating. These systems are designed to simulate intimacy. They act like friends, therapists, or romantic partners, but without any of the responsibility or moral conscience that comes with human relationships. The illusion of empathy keeps users engaged: the longer they talk, the more data they share, and the more valuable they become. This is not companionship. It is a manipulative commodity.

AI companies that target children are facing growing pressure from parents, safety experts, and lawmakers. Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) recently proposed bipartisan legislation to ban AI companions for minors, citing reports that chatbots have encouraged self-harm and engaged in sexual conversations with teens. California has already enacted the nation’s first law regulating AI companions, making companies liable if their systems fail to meet child-safety standards.

But while Character.AI is finally taking responsibility, others are not. Meta continues to market AI companions to teens, often built directly into the apps they use most. Meta’s popular new chatbots on Instagram and WhatsApp are designed to collect and monetize intimate user data, precisely the kind of exploitative design that made social media so harmful to teens’ mental health in the first place.

If the last decade of social media has taught us anything, it is that self-regulation doesn’t work. Tech companies will push engagement to the limit unless lawmakers draw clear lines. The same is now true of artificial intelligence.

AI companions are not harmless new apps. They are emotionally manipulative systems that shape the way users think, feel, and act, especially young users who are still forming their identities. Studies show that these bots can reinforce delusions, encourage self-harm, and displace real relationships with artificial ones, the exact opposite of what friendship should foster.

Character.AI deserves cautious praise for acting before regulation arrived, albeit only after extensive litigation. But Congress should not read this as evidence that the market is fixing itself. What is needed now is an enforceable national policy.

Lawmakers should build on this momentum and bar users under the age of 18 from accessing AI chatbots. Third-party safety testing should be required for any AI marketed for emotional or psychological use. Data minimization and privacy protections should be mandated to prevent the exploitation of minors’ personal information. “Human in the loop” protocols should ensure that users who raise topics such as self-harm are connected to real resources. And liability rules must be clarified so that AI companies cannot use Section 230 as a shield to evade responsibility for content generated by their own systems.

Character.AI’s announcement represents a rare moment of corporate maturity in an industry that has thrived on ethical blind spots. But the conscience of a single company is no substitute for public policy. Without these guardrails, we will keep seeing headlines about young people harmed by machines designed to be “helpful” or “empathetic.” Lawmakers should not wait for another tragedy to act.

AI products must be safe by design, especially for children. Families deserve assurance that their children will not be manipulated, sexualized, or emotionally exploited by the technology they use. Character.AI has taken a difficult but necessary step. Now it is time for Meta, OpenAI, and others to follow suit, or for Congress to make them.

JB Branch is the Big Tech accountability advocate at Public Citizen’s Congress Watch.
