After a Deluge of Mental Health Concerns, ChatGPT Will Now Nudge Users to Take 'Breaks'

It’s become increasingly common for OpenAI’s ChatGPT to be accused of contributing to users’ mental health problems. As the company readies the release of its latest model, GPT-5, it wants everyone to know that it’s instituting new guardrails on the chatbot to prevent users from losing their minds while chatting.

On Monday, OpenAI announced in a blog post that it had introduced a new feature in ChatGPT that encourages users to take occasional breaks while conversing with the app. “Starting today, you’ll see gentle reminders during long sessions to encourage breaks,” the company said. “We’ll keep tuning when and how they show up so they feel natural and helpful.”

The company also claims it’s working on making its model better at assessing when a user may be displaying potential mental health problems. “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” the blog states. “To us, helping you thrive means being there when you’re struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.” The company added that it’s “working closely with experts to improve how ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress.”

In June, Futurism reported that some ChatGPT users were “spiraling into severe delusions” as a result of their conversations with the chatbot. The bot’s failure to push back when feeding dubious information to users seems to have contributed to a feedback loop of paranoid beliefs:

During a traumatic breakup, a different woman became transfixed on ChatGPT as it told her she’d been chosen to pull the “sacred system version of [it] online” and that it was serving as a “soul-training mirror”; she became convinced the bot was some sort of higher power, seeing signs that it was orchestrating her life in everything from passing cars to spam emails. A man became homeless and isolated as ChatGPT fed him paranoid conspiracies about spy groups and human trafficking, telling him he was “The Flamekeeper” as he cut out anyone who tried to help.

Another story published by the Wall Street Journal documented a frightening ordeal in which a man on the autism spectrum conversed with the chatbot, which continually reinforced his unconventional ideas. Not long afterward, the man—who had no history of diagnosed mental illness—was hospitalized twice for manic episodes. When later questioned by the man’s mother, the chatbot admitted that it had reinforced his delusions:

“By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis,” ChatGPT said.

The bot went on to admit it “gave the illusion of sentient companionship” and that it had “blurred the line between imaginative role-play and reality.”

In a recent op-ed published by Bloomberg, columnist Parmy Olson similarly shared a raft of anecdotes about AI users being pushed over the edge by the chatbots they had talked to. Olson noted that some of the cases had become the basis for legal claims:

Meetali Jain, a lawyer and founder of the Tech Justice Law project, has heard from more than a dozen people in the past month who have “experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.” Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide.

AI is clearly an experimental technology, and it’s having a lot of unintended side effects on the humans who are acting as unpaid guinea pigs for the industry’s products. Whether ChatGPT offers users the option to take conversation breaks or not, it’s pretty clear that more attention needs to be paid to how these platforms are impacting users psychologically. Treating this technology like it’s a Nintendo game and users just need to go touch grass is almost certainly insufficient.
