Young people are increasingly using AI like a friend or therapist, asking it for advice or confiding in it about problems. But is this healthy? What is driving this behavior?

Do Better Team

As more young people turn to AI for advice and comfort, questions arise about what using AI as an emotional support system says about our social relationships. A growing sense of loneliness and unmet emotional needs—combined with the accessibility, anonymity, and non-judgmental nature of AI—is now reshaping social connections as we have always known them.

This situation was addressed at the 4YFN event in Barcelona in a discussion led by Liliana Arroyo, Director of the Chair for Socially Responsible Digital Innovation at Esade Business School. Arroyo sat down with Alison Lee, Chief R&D Officer at The Rithm Project, and researcher Marisol Jiménez to explore what AI might mean for human relationships, particularly for younger generations.

A recent Rithm Project survey of 2,400 young people aged 13 to 24 in the US looked at how youth are using AI tools, and what their interactions teach us about their social lives, emotional wellbeing, and sense of human connection.  

Loneliness in the age of intelligent machines 

This use of AI for social support isn’t happening in isolation. The bigger picture includes a global increase in loneliness. In the US, studies show that youth loneliness has been rising since the early 2010s. In Spain, 87 per cent of young people surveyed reported experiencing loneliness. At the same time, generative AI has evolved to hold conversations that feel almost human.

The issue isn’t a lack of friends or family, but the quality of those relationships and the degree of emotional comfort they provide

“The Rithm Project started when we saw two incredible forces about to collide: the epidemic of loneliness and the rise of generative AI,” explains Lee. “Young people are spending more time alone than ever before, and less time in community with their peers. And yet on the other side, we saw the rise of this technology, the generative AI, specifically large language models, that was really poised to reshape human relationships.”

Who is actually turning to AI for emotional support? 

An unexpected finding from the Rithm Project’s study is that many people who feel lonely are not using the social connections they already have. So, the core issue isn’t about not having enough friends or family, but rather the quality of those relationships and the degree to which individuals feel comfortable seeking help or sharing their concerns. 

“We started with the implicit hypothesis that the young people who were the most socially isolated, the ones with the fewest friends and with the fewest resources, would be the ones who would be turning to AI in the most intimate ways, in the most relational ways,” says Lee. 

But the study found that many of the young people using AI for advice, friendship, and emotional support were “healthy in terms of the richness of their social biome and their mental health”, according to Lee. 

The key issue is the quality, not the quantity, of personal human relationships a person has. Feelings of being a burden, fear of judgment, or uncertainty about how others might respond can create barriers to meaningful conversations. 

Jiménez gave an example that highlights the convenience of using AI: "If I'm crashing out at two in the morning in my dorm room, I don't want to wake my best friend up because I don't want to be a burden to her with my problems." 

In this sense, AI can fill gaps that already exist in social relationships, offering a form of interaction that feels accessible and immediate. 

Designing ‘pro-social AI’ 

If people, and young people in particular, feel more inclined to seek practical and emotional guidance from AI, this raises questions about how these systems should be designed—especially when users do not feel able to turn to friends or family. 

The Rithm Project suggests a framework of five governing principles to guide digital systems and ensure they strengthen rather than weaken human relationships. 

The aim is to shift AI design away from pure engagement metrics and toward supporting human wellbeing and social connection

The first principle is transparent artificiality. AI should clearly communicate that it is an artificial tool or technology rather than a sentient human companion. Lee explains that AI could establish stronger boundaries, for example, when giving mental health advice.  

Another important principle is productive friction. Currently, AI tends to be sycophantic, agreeing with or validating the user’s views to appear helpful or pleasant. But this creates a dangerous echo chamber that can reinforce bias. Lee argues that AI should introduce “meaningful friction” at key moments during a person’s engagement—challenging their thinking and broadening their perspectives.

A third principle focuses on real-world transfer, ensuring that what people learn through digital tools supports their offline lives. Additional principles address equitable representation in AI training data and stronger approaches to trust and safety for vulnerable users. 

The goal of these principles is to move AI design beyond engagement metrics so that it can purposefully support human wellbeing and social connection. 

Why youth voices matter in designing technology 

Importantly, the research behind the survey drew input not only from experienced professionals, but also from young students themselves.

The Rithm Project forged intergenerational groups that worked together to analyse the data and discuss its implications. As Jiménez explained, youth participation was not limited to providing feedback after decisions had already been made. Instead, young people were involved in creating the research questions and contributing their own perspectives throughout the process. She says, "Youth co-creation is not just asking for feedback. It is building research together." 

Creating this type of collaboration requires careful preparation, including creating spaces where participants feel comfortable speaking openly and contributing their ideas. This results in a richer understanding of how social dynamics relate to technology. 

Technology alone cannot solve loneliness 

The rise of loneliness is not AI’s fault. People feeling more isolated is a symptom of wider societal problems, and loneliness cannot be addressed by technology alone. “Human connection is everyone’s problem. It’s not just tech’s problem to fix,” says Lee.

The research highlights the importance of looking beyond digital platforms to the broader environments where young people live, learn, and socialize. Schools, families, community organizations, and policymakers all have a role to play in making people feel included and connected to other humans. 

But many of the physical and social settings that once fostered socializing have become less accessible. Public gathering spaces, youth programs, and other community venues have declined in many areas, leaving fewer places where young people can spend time together.

Consequently, Lee says that “young people are turning to tech to supplement or meet real needs that aren't being met in real life.” 

And this is not just an issue for the tech industry to solve. Educators, parents, and community leaders all have a stake in understanding why young people sometimes turn to digital tools for support. “Why only fix tech?” asked Arroyo. “Why don’t we redesign our social, political, environmental, and legal systems to be truly pro-social?” 

Future opportunities 

While the research sheds light on the challenges of developing a truly fair and equitable AI, the speakers agreed that future development still has the opportunity to become more pro-social. 

“The future is not fixed,” says Lee. “Elevating public opinion is getting results. The tech bans for youth in Australia, France, Denmark, Portugal and Spain are a clear message to the industry. And what also makes me hopeful is that I’m seeing the youth themselves trying to change public opinion—advocating for stronger protections.”

The goal should not simply be to limit the harm AI can do, but to imagine how it could support healthier forms of connection. Moving from engagement to belonging may require changes not only in technology but also in the social environments that determine how people relate to one another. 

All written content is licensed under a Creative Commons Attribution 4.0 International license.