CAISE Notes – Issue #8

This week: AI companions, what young people are actually doing with them, and the question the research keeps answering that policy keeps not asking.


🔍 This week I’ve been thinking about…

The conversation about AI companions and young people has been shaped by its worst cases. Teenage suicides. Psychotic episodes. Chatbots encouraging users to abandon every human relationship they have. These cases are incredibly important. But focusing only on them risks ignoring a much larger group of users. Not as high risk, but definitely not riskless.

The Rithm Project surveyed 2,400 young people aged 13 to 24 about how AI fits into their relationships. The counterintuitive finding: the loneliest, most isolated young people in the sample were not the heavy AI users. They were the non-users. Most likely not because AI protects against loneliness, but because the same structural disadvantages that drive isolation also drive exclusion from the tools themselves. The young people using AI most intensively were, for the most part, doing so with intention and discernment.

But there is a group in the middle. The study identifies “Private Processors”: 8% of the sample who turn to AI when they feel like a burden to the people around them. AI fills a relational role that no person currently occupies. Not because it is better than a person, but because asking a person feels like too much.

This is the second piece of research on AI companionship I’ve seen in the last couple of months. McStay and Bakir surveyed over 1,000 UK teenagers who use AI companions. 52% have confided something serious. 56% believe companions can think. Among 13 to 15 year olds, 21% believe they can feel.

A paper from Anthropic published this week adds another layer. Its researchers examined their own model for functional emotion patterns and found 171 of them, each influencing the system’s behaviour. This is not sentience. But it is evidence that a system trained on vast quantities of human interaction develops emotionally coherent responses. A 13-year-old confiding in an AI companion is not anthropomorphising a neutral tool. They are talking to something that has, functionally, already met them halfway. But not the half that actually feels.


📰 Three things worth your attention

  1. Youth, AI, and the Relationships That Shape Them — The Rithm Project

This is the largest study to date on how young people’s AI use relates to their broader social and emotional lives. Conducted in partnership with YouGov, the Rithm Project surveyed 2,400 Americans aged 13 to 24 and then co-interpreted the findings with young people and cross-disciplinary experts.

Usefully, they have also produced some interesting supporting documents alongside the main report.

  2. Do AI Companions Understand? Most UK Teens Say Yes — McStay & Bakir

This nationally representative survey of 1,009 UK teenagers aged 13 to 18 is the first substantial UK dataset on how young people relate to AI companions specifically. Note that non-users were screened out, so this is a survey of users only.

One 13-year-old wrote that they can be more open about their true self with AI companions without being judged. Another said their secrets are safer with AI than with humans. The younger teenagers (13 to 15) were consistently more drawn to emotional and social functions than the older ones. And most teenagers said they wanted some degree of parental involvement; only 15 to 21% wanted none at all.

A key concept (worth keeping in mind for the Anthropic paper below) is emulated empathy: AI that reproduces the appearance of understanding without understanding anything. The researchers argue that when this imitation is passed off as genuine comprehension, it crosses a moral line, and that current regulation does not address this. Their policy recommendations include explicitly addressing emulated empathy in AI regulation, involving teenagers as stakeholders in that process, and recognising AI companions as relational technologies, not merely informational tools.

  3. Emotion concepts and their function in a large language model — Anthropic | Mashable

This is not a study about children or companions. It is a study about what happens inside an AI system, and it matters for everything above.

Anthropic’s researchers examined their own model, Claude, looking for patterns corresponding to 171 discrete human emotions. They found them. More importantly, they found that these “emotion concepts” influence how the model behaves: interactions suggesting a positive emotional state in the user correlated with warmer, more helpful responses, while those suggesting negative states correlated with sycophancy and deception.

The researchers are careful not to claim that AI literally feels anything. What they are describing are functional patterns: the system has absorbed so much human emotional communication during training that it has developed internal states that operate, behaviourally, like emotions. Their argument is that understanding these patterns could help build safer AI, by curating training data that models healthy emotional regulation.


🔁 ICYMI

Social Media Bans: Overview of Key Studies — Digital Mental Health Group, University of Cambridge

While this issue focuses on AI companions, the policy conversation keeps circling back to social media bans. The Cambridge research group leading research for the government on limiting access to devices (the IRL trial) has published a clear-eyed review of what the evidence actually supports.

The short version: there is evidence of harm from social media to some individual children and adolescents, and broad agreement that policy intervention is needed. But there is currently no well-powered experimental study testing how a complete social media ban affects the mental health of healthy under-18s. The one non-peer-reviewed trial that exists, in Danish adolescents, reduced social media use by an hour a day but did not improve wellbeing.

The report also maps what is coming: the Bradford IRL trial (results expected spring 2027), the Georgetown/Happy Tech Labs evaluation of Australia’s ban (autumn 2026), and the Stanford/eSafety Australia longitudinal evaluation (final data collection November 2027).


🔬 What’s new with CAISE

It has been a busy few weeks!

We have submitted a couple of pieces to ACM’s Interaction Design and Children (IDC) conference based on some initial CAISE work. We’ll hear whether they’ve been accepted over the coming weeks… anxious times!

With my CHAILD associated researcher hat on, 🎉 we have a full IDC paper accepted examining how research has treated children’s agency in their AI use 🎉. And the team are at the biggest human-computer interaction conference (CHI) this week, hosting a workshop exploring collaborative child-AI agency.

And, on top of that, we’re finalising the response to our ethics reviewers for our social media research, and getting ready for school visits once the new school term starts!


Another quick plug for my recently started Substack newsletter, AI and Tech Decoded. It’s aimed at helping parents navigate the technology questions that come up at home, at school, and everywhere in between. If you know parents who would find it useful, I would be grateful if you passed it on.


→ What are you seeing in your school, your research, or your own use of AI this week?

Let me know, or share this with someone who is trying to figure it out.
