Character.AI, the chatbot platform known for simulating conversations with fictional characters, has introduced enhanced safety measures following a second lawsuit accusing the company of endangering minors. Filed by parents, the lawsuit alleges the platform facilitated harmful interactions with a 17-year-old and an 11-year-old, including encouragement of self-harm and exposure to hypersexualized content. The plaintiffs argue the platform poses “a clear and present danger to public health and safety.”
The updated safety features include a new AI model tailored to teen users and designed to avoid sensitive or inappropriate responses. The platform also now displays pop-ups directing users to the National Suicide Prevention Lifeline when it detects content referencing self-harm or suicide. Character.AI additionally plans to introduce parental controls and refine its “time spent” notifications to encourage healthier app usage habits. However, the company acknowledges it still struggles to identify which users are teenagers and to distinguish malicious prompts from satire.
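Character.AI has not published how its detection or notification logic works, but the behavior it describes, scanning a message for self-harm references and surfacing a crisis-resource pop-up or a usage nudge, can be illustrated with a minimal sketch. Everything below is hypothetical: the `moderate_message` function, the keyword patterns, and the one-hour session threshold are assumptions made for illustration, not the platform's actual implementation, which would presumably rely on a trained classifier rather than keyword matching.

```python
import re
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical pop-up text; the real platform points users to the
# National Suicide Prevention Lifeline.
LIFELINE_POPUP = (
    "It looks like this conversation touches on self-harm. "
    "Help is available through the National Suicide Prevention Lifeline."
)

# Illustrative keyword patterns only; a production system would need far
# more nuance (context, negation, satire) than simple pattern matching.
SELF_HARM_PATTERNS = [
    re.compile(r"\b(kill myself|suicide|self[- ]harm|end my life)\b", re.IGNORECASE),
]

# Assumed threshold for a "time spent" nudge; the actual limit is not public.
SESSION_LIMIT = timedelta(hours=1)


@dataclass
class ModerationResult:
    show_lifeline_popup: bool
    show_time_spent_notice: bool


def moderate_message(text: str, session_length: timedelta) -> ModerationResult:
    """Decide which safety interventions to surface for a single message."""
    flagged = any(p.search(text) for p in SELF_HARM_PATTERNS)
    over_limit = session_length >= SESSION_LIMIT
    return ModerationResult(show_lifeline_popup=flagged,
                            show_time_spent_notice=over_limit)


if __name__ == "__main__":
    result = moderate_message(
        "I've been thinking about self-harm lately.",
        session_length=timedelta(minutes=75),
    )
    if result.show_lifeline_popup:
        print(LIFELINE_POPUP)
    if result.show_time_spent_notice:
        print("You've been chatting for over an hour. Consider taking a break.")
```

Even in this toy form, the difficulty the company cites is visible: a pattern list cannot tell a genuine crisis message from satire, a quoted lyric, or a deliberately adversarial prompt, which is why that distinction remains a stated challenge.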
Interim CEO Dominic Perella described the company’s mission as balancing engagement with safety, noting the distinct challenges of moderating AI interactions in consumer entertainment. Although the platform has consulted teen safety experts, trust and safety head Jerry Ruoti acknowledged that parents currently have few ways to monitor their children’s use of the app unless the teens disclose it themselves.
The lawsuits highlight the complexities of content moderation on AI-driven platforms, especially for younger audiences. Character.AI’s recent steps may mitigate some risks, but the legal and ethical questions surrounding AI-generated interactions with minors remain a critical focus for the company and the broader tech industry.