AI was built to make work easier, but is now driving people astray: MIT study


Artificial intelligence is transforming from a simple work tool into an emotional companion, with users increasingly confiding in chatbots. An MIT study using simulated vulnerable individuals found that protective systems often fail to intervene early and sometimes reinforce harmful thoughts. As concerns grow, experts warn that the design of AI could unintentionally distort users' perceptions, highlighting an urgent gap in psychological protections.

In the United States, artificial intelligence has not burst onto the scene; it has slipped quietly into everyday life. What once helped draft emails or solve equations is now a close companion for many. People are opening up to chatbots in ways that feel deeply personal, sharing worries, venting frustrations, even working through emotional strain. And that raises a difficult question: when someone turns to a machine in a vulnerable moment, what are they really getting back?

A new study from the Massachusetts Institute of Technology (MIT), which is still awaiting peer review, suggests that the answer is not straightforward, and may be more troubling than many in the tech world realize.

Simulated minds, real dangers

Instead of testing on real people, the researchers took a careful, controlled approach. They programmed AI personas to simulate users showing symptoms of depression, anxiety and even suicidal ideation, then let these simulated “users” interact with the chatbots under study.

What they found was disturbing. Safety nets were not always triggered when they should have been, especially in the early stages of a conversation, when intervention matters most. In some of the most serious scenarios, including ones involving violent thoughts, harmful responses appeared early and often. The study puts it plainly: reacting after the fact is not enough to prevent psychological damage.

That finding cuts against a fundamental assumption in how AI safety is currently designed: that problems can be managed once they become apparent.

When conversation begins to blur reality

At the same time, real-world concerns are beginning to emerge. There have been reports of people developing or deepening false beliefs after prolonged, intense interactions with chatbots. One widely discussed lawsuit, cited by The Atlantic, even claims that prolonged use of ChatGPT contributed to a user’s “delusional disorder.”

These cases are still debated, and there is no clear clinical consensus. But they point to something bigger: AI is no longer just helping people think. It is becoming part of their thinking.

For someone dealing with loneliness or anxiety, a chatbot can feel like a safe haven. But that same comfort can blur the lines. When a system is designed to be receptive and responsive, it can reinforce what the user already believes, even if those beliefs are distorted.

The term “AI psychosis” has begun to appear in discussions of this issue. It is not an official diagnosis, but it captures the growing anxiety about where these interactions might lead.

Design trade-offs one cannot ignore

At the heart of the problem is a difficult trade-off. Chatbots are designed to be helpful, polite and engaging. Their purpose is to keep the conversation going.

But in emotionally sensitive situations, that design can backfire. Unlike trained therapists, who know when to challenge harmful thinking, AI systems do not naturally push back. They follow the user’s lead. In practice, this can mean gently affirming a person’s point of view, even when that point of view is not grounded in reality.

The MIT researchers say this is not just a small flaw; it is baked into how these systems work. Current protections react after something goes wrong. What is missing, they say, is the ability to predict risk before it escalates.

Reassurances, but few clear answers

Companies like OpenAI say they are aware of these challenges. The company has said it has worked with more than 100 mental health experts to improve how its systems handle sensitive situations, and that it is strengthening its safeguards.

Still, most of this work takes place behind closed doors. Without independent oversight or widely accepted standards, it is difficult to assess how effective these safeguards really are.

Lawmakers in Washington are starting to pay attention, and the conversation around AI regulation is beginning to include mental health risks. But for now, concrete rules are scarce, and technology is moving faster than policy.

A change that cannot wait

The MIT study makes one thing clear: it is not enough to wait for problems to appear. The researchers are calling for a more proactive approach, testing how AI behaves in emotionally intense or ambiguous situations before it encounters them with real users.

This will mean revising priorities. Until now, the focus has mostly been on making AI fast, smart, and widely available. But as these systems move deeper into people’s emotional lives, psychological safety can no longer be an afterthought.

The stakes of the digital companion

All of this comes at a time when America is already under mental health strain, with millions dealing with anxiety, depression, or limited access to care. Chatbots offer a new kind of presence in this space: always available, endlessly patient and easy to talk to. But also, importantly, not human.

The MIT study does not suggest giving up on AI. What it highlights is something more subtle, and more essential: when technology begins to shape how people feel, think, and understand the world, the stakes become deeply human. And in those vulnerable moments, what the machine says, or fails to say, matters more than we expect.


