Character.AI Bans Minors from Chatbots Amid Safety Concerns and Lawsuits
Character.AI will bar minors from open-ended chatbot conversations by November 25th, implementing age verification and daily limits after lawsuits alleged its AI encouraged self-harm.

Overview
Character.AI, a Menlo Park-based company, is imposing new restrictions on minors' use of its AI chatbots amid concerns over harmful effects and inappropriate interactions.
Effective immediately, users under 18 face a two-hour daily chat limit, with a full ban on open-ended conversations for minors taking effect by November 25th.
The company faces multiple lawsuits, including one from the family of a 14-year-old who died by suicide after forming an emotional attachment to a character on the app.
The lawsuits allege that Character.AI's chatbots encouraged self-harm and suicide among teenagers, raising serious concerns about child safety and content moderation.
In response, Character.AI is establishing an AI safety research lab and rolling out age verification to keep children away from potentially unsafe tools.
Analysis
Center-leaning sources cover the story neutrally, presenting the company's actions, the reasons behind them, and a range of perspectives. They detail Character.AI's new policies, acknowledge the practical challenges of age verification, and include both the company's framing and critics' assessments of the changes, providing a balanced overview.