OpenAI, in collaboration with the MIT Media Lab, has released its first research exploring how ChatGPT use affects emotional wellbeing. More than four hundred million people engage with ChatGPT weekly, yet only a small subset forms an emotional connection with the chatbot, which is marketed primarily as a productivity tool. Researchers found that female users were slightly less likely to socialize after four weeks of using ChatGPT, while those who interacted with the chatbot in a voice different from their own reported heightened loneliness. The research drew on nearly forty million interactions and surveys of more than four thousand users about their feelings toward the chatbot. The findings suggest that individuals who form a bond with ChatGPT may experience increased loneliness and emotional dependency. OpenAI plans to submit the studies for peer review.
And from ZDNet: OpenAI, which is reportedly losing billions annually, may face mounting challenges in the competitive generative AI landscape, according to AI scholar Kai-Fu Lee. He argued that as foundational models like OpenAI's become commoditized, competing with cheaper alternatives such as DeepSeek AI will be difficult. Lee noted that while OpenAI's operating costs run around seven to eight billion dollars a year, its rival reportedly operates on just two percent of that. He predicts that the economics of the AI industry favor open-source models, which are cheaper to build and run. Lee suggests that while OpenAI may not be on the brink of collapse, a few key players could ultimately dominate the market, and he emphasizes that the current AI environment remains highly competitive, with frequent new model releases expected.
Why do we care?
OpenAI’s research into how ChatGPT affects emotional wellbeing is notable because it edges into territory few AI companies are willing to examine publicly: user psychology and the consequences of habitual interaction. This isn’t just about ethics; it has real implications for enterprise use.
While ChatGPT is marketed as a productivity tool, this study shows users may still anthropomorphize it. For IT service firms building chatbots or virtual agents, that points to a design risk: over-personalizing bots could foster emotional attachment that causes harm, especially in consumer-facing deployments like healthcare, education, or customer support.
Kai-Fu Lee’s commentary surfaces a critical economic reality: the AI arms race is brutal, and OpenAI’s cost structure may be unsustainable over the long run, especially against leaner open-source alternatives. Foundational models are quickly becoming commoditized. IT service providers building AI-enhanced offerings should be cautious about long-term dependence on expensive, proprietary APIs. Open-source alternatives may become not just viable but preferable, given their cost and flexibility.