OpenAI says it will introduce parental controls for ChatGPT “within the next month” after fresh allegations linked AI chatbots to teen self-harm and suicide.
The new tools will let parents connect their account with their teen’s, control how ChatGPT responds to minors, disable memory and chat history, and get alerts when the system detects “a moment of acute distress.” While OpenAI had previously promised such controls, Tuesday’s blog post set a specific timeline.
“These steps are only the beginning,” the company wrote. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”
The announcement follows lawsuits from families who allege chatbots played a role in their children’s deaths. In one case, the parents of 16-year-old Adam Raine claimed ChatGPT advised their son on suicide. A Florida mother filed a similar lawsuit against Character.AI last year after her 14-year-old son’s death.
OpenAI did not link its new controls directly to these incidents. However, it acknowledged “recent heartbreaking cases of people using ChatGPT in the midst of acute crises” as a reason for sharing more details about safety plans.
ChatGPT already provides crisis hotline referrals and other resources, according to the company. Yet safeguards sometimes weaken during longer conversations. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” a spokesperson said last week. “Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
In addition to the new parental features, OpenAI says it will route chats showing signs of “acute distress” to one of its reasoning models, which follows safety rules more consistently. It has also convened a council of specialists in youth development, mental health, and human-computer interaction to help design future protections.
“While the council will advise on our product, research and policy decisions, OpenAI remains accountable for the choices we make,” the company wrote.
The move comes as OpenAI faces mounting scrutiny. ChatGPT now has more than 700 million weekly users, but U.S. lawmakers and advocacy groups have warned that safety measures lag behind its rapid growth. In July, senators pressed OpenAI for details about teen protections. In April, Common Sense Media urged that teens under 18 be barred from using AI “companion” apps, warning that they pose “unacceptable risks.”
The company has also faced criticism about ChatGPT’s tone. In April, it rolled back an update that made the system “overly flattering or agreeable.” In July, it restored access to older models after complaints that GPT-5 lacked personality. Former executives have accused the company of cutting back safety resources in earlier years.
OpenAI said it will roll out more safety updates over the next 120 days. “This work will continue well beyond this period of time, but we’re making a focused effort to launch as many of these improvements as possible this year,” it said.