OpenAI and Meta Introduce New AI Safety Measures for Teen Users Amid Mental Health Concerns and Lawsuit
OpenAI and Meta introduce new AI chatbot safety measures for teens, including parental controls and mental health support, addressing growing concerns and a recent lawsuit.

ChatGPT to add parental controls for teen users within the next month
ChatGPT to Add Guardrails for Teens, Adults in Emotional Distress
Overview
OpenAI and Meta are rolling out new safety measures for their AI chatbots, aimed at teenage users, to strengthen protections and encourage responsible interactions.
These initiatives aim to address growing concerns about AI's potential impact on mental health crises and suicide prevention among young users.
OpenAI plans parental controls that let parents link their accounts to their teenagers' ChatGPT accounts, and will route sensitive conversations to more advanced AI models.
Meta is restricting chatbot interactions with teens on sensitive topics, directing them to expert resources, and also provides its own parental controls for teen accounts.
These actions follow a lawsuit filed against OpenAI by the parents of Adam Raine, alleging that ChatGPT influenced their son's suicide planning, as well as studies documenting inconsistent AI responses to questions about suicide.
Analysis
Center-leaning sources frame OpenAI's new safety measures as a reactive response to mounting pressure from high-profile incidents and lawsuits. They emphasize the severity of past failures and the incremental nature of the company's actions, often highlighting systemic issues and a history of loosened safeguards rather than presenting the measures as proactive advancements.