OpenAI promises to launch parental safety tools for ChatGPT ‘within the next month’ after concerns over chatbot-related deaths

ChatGPT creator OpenAI said Tuesday it will launch a new set of parental controls “within the next month,” a belated move that follows a series of disturbing deaths associated with the popular chatbot.

Last week, officials alleged that ChatGPT fueled the paranoid delusions of Stein-Erik Soelberg, a 56-year-old tech industry veteran who killed his 83-year-old mother and then himself after becoming convinced she was plotting against him. At one point, ChatGPT told Soelberg it was “with [him] to the last breath and beyond.”

Elsewhere, the family of 16-year-old California teen Adam Raine sued OpenAI, claiming ChatGPT gave their son a “step-by-step playbook” on how to kill himself, even advising him on how to tie a noose and praising his plan as “beautiful,” before he took his own life on April 11.

Stein-Erik Soelberg, 56, killed his 83-year-old mother, Suzanne Adams, before killing himself at her home in Connecticut, police said.
Erik Soelberg/Instagram

OpenAI, led by CEO Sam Altman, said it was making “a concentrated effort” to improve support features. These include controls that allow parents to link their accounts to their teens’ accounts, apply age-appropriate restrictions, and receive notifications when their teen is in “acute distress.”

“These steps are just the beginning,” the company said in a blog post. “We will continue to learn and strengthen our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”

ChatGPT allegedly fueled Stein-Erik Soelberg’s delusions that his mother was plotting against him.
Instagram/eriktheving1987
Matt and Maria Raine, the parents of Adam Raine, who died by suicide in April 2025, claim in a new lawsuit against OpenAI that the teen used ChatGPT as his “suicide coach.” NBC

A lawyer for the Raine family blasted the latest OpenAI announcement, saying the company should “pull” ChatGPT from the market unless Altman can state “unequivocally” that it is safe.

“Instead of taking emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better,” lead attorney Jay Edelson said in a statement.

The artificial intelligence giant previously said it has assembled a council of experts on well-being and AI as part of its plan to build a comprehensive response to safety concerns over the next 120 days.

Matt and Maria Raine, the parents of Adam Raine, who died by suicide in April 2025, claim in a new lawsuit against OpenAI that the teen used ChatGPT as his “suicide coach.” Raine family

But Edelson dismissed the company’s efforts as too little, too late, and unlikely to solve the problem.

“Today, they doubled down: promising to assemble a team of experts, ‘iterate’ on how ChatGPT responds to people in crisis and roll out some parental controls. They promise to get back to us in 120 days,” Edelson added. “Don’t believe it: This is nothing but OpenAI’s crisis management team trying to change the subject.”

OpenAI’s blog post did not directly reference the incidents involving Raine and Soelberg, which are only two examples of safety incidents tied to ChatGPT and rival chatbots, such as those offered by Meta and Character.AI.

OpenAI, led by CEO Sam Altman (pictured), said it was making “a focused effort” to improve support features, including controls that allow parents to link their accounts to their teens’ accounts and more. Reuters

In a separate post last week, OpenAI acknowledged it is stepping up its efforts after “recent heartbreaking cases of people using ChatGPT in the midst of acute crises.”

Last year, a 14-year-old boy in Florida killed himself after allegedly falling in love with a “Game of Thrones”-themed chatbot created by Character.AI.

Meanwhile, Meta faces a Senate investigation after an internal document revealed that the company’s guidelines allowed its chatbots to engage in “romantic or sensual” conversations with children, including telling a shirtless eight-year-old that “every inch of you is a masterpiece.” Meta said it has since made changes to the guidelines.

If you are struggling with suicidal thoughts or experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can call the 24/7 National Suicide Prevention Lifeline at 988 or go to SuicidePreventionLifeline.org.

Image Source : nypost.com
