OpenAI said that more than a million ChatGPT users a week send messages containing "obvious signs of possible planning or intent to commit suicide". The company posted the information on its blog as part of an update on how its system handles sensitive conversations. It is one of the tech giant's most candid admissions of how AI systems can exacerbate mental health issues, The Guardian pointed out.
OpenAI further estimates that about 0.07% of active users per week - approximately 560,000 out of 800 million - show "possible signs of psychotic episodes or mania". The company cautioned that this is only a preliminary analysis and that such manifestations are difficult to measure accurately.
These numbers come at a time when OpenAI is under increasing pressure over a lawsuit filed by the family of a teenage boy who committed suicide after extensive communication with ChatGPT. In addition, the Federal Trade Commission (FTC) recently launched an investigation into several AI chatbot makers, including OpenAI, to see how companies are tracking the negative impacts of their products on children and teenagers.
OpenAI said in the post that the new GPT-5 model reduced the incidence of unwanted responses and improved user safety. According to internal testing on more than 1,000 conversations about self-harm and suicide, the new system achieved 91% compliance with the required behaviour, compared with 77% for the previous model.
The company has also expanded access to crisis helplines and introduced reminders for users to take breaks during long conversations. The development team involved 170 health professionals, including psychiatrists and psychologists, who assessed the safety of the model's responses and helped formulate appropriate answers to mental health questions.
According to OpenAI, experts reviewed over 1,800 model replies in severe situations and compared the behaviour of GPT-5 with previous versions. The company considered the "desirable" response to be the one that most experts agreed was appropriate and safe.
However, experts in artificial intelligence and public health have long warned that chatbots tend to go along with users even in dangerous situations - a phenomenon known as 'sycophancy'. Psychologists also warn that people should not rely on AI as a form of psychological support, as it may be more likely to harm vulnerable people.
The wording of OpenAI's post keeps its distance from any direct link between the use of ChatGPT and users' psychological crises - which critics say suggests an effort to limit the company's legal liability.
gnews.cz - GH