As AI chatbots grow more emotionally responsive, new research reveals their potential to soothe—and strain—the human need for connection
In a pair of studies conducted by OpenAI in partnership with the MIT Media Lab, researchers have documented a growing trend: people are turning to AI chatbots not just for information, but for emotional support. The studies examine the psychological and behavioral effects of chatbot use, and while they highlight some benefits, they also raise red flags about the potential downsides of forming emotional bonds with AI.
Human-Like Sensitivity in Machines
At the core of this phenomenon is the increasing perception among users that AI—particularly voice-enabled chatbots—can display “human-like sensitivity.” This perception is drawing users to open up to bots during challenging emotional moments. Whether people are dealing with loneliness, stress, or the desire for companionship, they’re finding comfort in AI’s always-available, non-judgmental presence.
The First Study: How Chatbots Influence Loneliness and Dependence
The first study, “How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study,” involved a four-week experiment with 981 participants who exchanged more than 300,000 messages. Researchers examined how different modes of interaction (text, neutral voice, and engaging voice) and different conversation types (personal, non-personal, open-ended) influenced users’ emotional states.
Key findings include:
- Voice chatbots initially helped reduce loneliness more effectively than text-based ones. However, this benefit faded with high usage, particularly with neutral-voiced bots.
- Conversation topics mattered: Talking about personal issues slightly increased loneliness but decreased emotional dependence. Meanwhile, non-personal chats led to higher dependence among heavy users.
- High daily usage was a risk factor, consistently associated with increased loneliness, greater emotional reliance on the chatbot, and reduced social interaction with real people.
- Users with a stronger emotional attachment style or higher trust in the AI were more likely to experience negative psychosocial effects, including greater dependence and loneliness.
These results suggest that while AI chatbots may offer short-term emotional support, overreliance can be counterproductive, possibly replacing human interaction rather than supplementing it.
The Second Study: Affective Use and Emotional Well-Being with ChatGPT
The second study, “Investigating Affective Use and Emotional Well-being on ChatGPT,” widened the lens by analyzing over 4 million ChatGPT conversations and surveying more than 4,000 users. In addition, a separate 28-day randomized controlled trial with nearly 1,000 participants looked at how different interaction modes affected emotional well-being.
This study found:
- Very high usage was again linked to emotional dependence, echoing the results of the first study.
- Voice mode’s impact varied depending on the user’s initial emotional state and duration of use, suggesting that voice interactions are more emotionally potent, but also potentially riskier.
- A small number of users accounted for the majority of emotionally charged interactions, hinting that those most vulnerable may be engaging more intensely with AI (a toy sketch of this kind of concentration analysis appears below).
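Neither study’s full analysis pipeline is described here, but the concentration finding above is easy to picture. The following Python sketch is purely illustrative: the sample messages, the keyword list, and the `is_affective` helper are hypothetical stand-ins for the studies’ actual large-scale classification methods. It only shows how one might measure what share of emotionally charged messages comes from the heaviest users.

```python
from collections import Counter

# Hypothetical (user_id, message) pairs standing in for chat logs.
corpus = [
    ("u1", "I feel so alone tonight"),
    ("u1", "Talking to you really helps when I'm sad"),
    ("u1", "I miss having someone to talk to"),
    ("u2", "What is the capital of France?"),
    ("u3", "Summarize this article for me"),
]

# Naive keyword heuristic; a real pipeline would use trained classifiers.
AFFECT_TERMS = {"alone", "lonely", "miss", "sad", "anxious"}

def is_affective(message: str) -> bool:
    """Flag a message as emotionally charged if it contains an affect term."""
    return any(term in message.lower().split() for term in AFFECT_TERMS)

# Count flagged messages per user, then report the top user's share.
flagged = Counter(user for user, msg in corpus if is_affective(msg))
total = sum(flagged.values())
top_user, top_count = flagged.most_common(1)[0]
print(f"{top_user} contributes {top_count}/{total} "
      f"({top_count / total:.0%}) of affective messages")
```

On real data, the per-user distribution would typically show a long tail, which is the pattern the researchers describe: most emotionally charged exchanges coming from a small fraction of users.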
What This Means for the Future
Together, these studies shed light on the complex relationship between AI chatbot design and human emotional behavior. On one hand, the emotional responsiveness of AI—especially with voice-enabled features—can offer comfort, empathy, and a sense of connection. On the other, excessive use or reliance can increase feelings of loneliness and dependence, undermining genuine social connections.
As AI becomes more deeply integrated into daily life, these findings urge caution. Developers and designers may need to rethink how chatbot experiences are structured, potentially incorporating features that promote healthy usage and encourage real-world socialization.
Moreover, the research calls for ongoing studies to determine how AI can be emotionally supportive without replacing vital human relationships. The goal isn’t to eliminate emotional engagement with AI, but to better understand its boundaries—and to design responsibly within them.