Category: Research Updates

  • CAISE Notes – Issue #9

    This week: social media bans, sovereign AI, and a government that wants children to say technology is bad while betting half a billion pounds that it isn’t.


    🔍 This week I’ve been thinking about…

    Last week, the Prime Minister summoned senior leaders from Meta, TikTok, X, Snap and Google to Downing Street and told them that a ban on children using their platforms would be preferable to a world where harm is the price of social media use. The government already has the powers to act. It is waiting for its Growing Up in the Online World consultation to close on 26 May.

    That same evening, the Technology Secretary launched the £500m Sovereign AI fund, calling for the UK to be “an AI maker, not just an AI taker.”

    I have been writing about this consultation since Issue 3, when I looked at its design: the parental login barriers, the single age bracket covering 10 to 21-year-olds, the leading questions, the lack of a save or back button. The problems I raised then were about access and design. The problems now are bigger.

    The consultation is under legal challenge. Two fathers are preparing a High Court action after it emerged that the government’s survey contractor will use Amazon and Microsoft AI to process the responses. The tools being used to summarise public views on technology regulation are built by companies that stand to be regulated by the outcome.

    And then there is the question of what children are actually being asked. The consultation’s questions about AI chatbots are, roughly: tell us how they are bad for you. Not how you use them. Not what you get from them. Not what you would change. Just: how are they bad. Children are being invited to participate in a process that has already decided what shape their answers should take.

    Set that against the Sovereign AI fund announcement and the contradiction becomes structural, not just optical. The government is asking children to confirm that technology harms them. It is simultaneously investing half a billion pounds in the premise that technology is the country’s future. Children are being told they are part of the conversation. They are not being given the conversation that matters.

    Regular readers will know that a recurring theme in this newsletter is who gets asked, what they get asked, and whether the answers are allowed to go anywhere uncomfortable. This week is a case study. The consultation closes in five weeks. The direction of travel was set before it opened.


    📰 Three things worth your attention

    1. Starmer summons social media bosses to Downing Street and threatens a ban — GOV.UK / The Scotsman

    Senior leaders from Meta, TikTok, X, Snap and Google were called to Downing Street on Thursday and told that a ban on children accessing their platforms remains on the table. The government has secured the powers to act once the Growing Up in the Online World consultation closes on 26 May. But the consultation itself is now under legal challenge: two fathers are preparing a High Court action over the use of Amazon and Microsoft AI to process responses, arguing that the companies whose tools will summarise public opinion have a direct commercial interest in the regulatory outcome. Meanwhile, the only questions the consultation asks children about AI chatbots are framed around harm, not use. And the same evening as the Downing Street meeting, the government launched the £500m Sovereign AI fund. The messaging to children: technology is dangerous. The messaging to industry: technology is the future. Both cannot be the whole story.

    2. Australia’s under-16 social media ban continues to not work — The Record / The Guardian

    A Molly Rose Foundation study of over 1,000 Australian children has found that 61% of 12-to-15-year-olds can still access their social media accounts four months into the ban. Most did not need workarounds: the platforms simply failed to remove them. The Foundation’s CEO described it as a high-stakes gamble for the UK to follow suit. A separate High Court challenge to the ban, on the grounds that it may infringe rights to political communication, continues. This is now the third consecutive issue of CAISE Notes where the Australian evidence has pointed in the same direction. The ban has not changed the landscape. It has changed who is expected to work around it.

    3. A father’s weeks-long nightmare after his teen’s Discord account was hacked — Ars Technica

    A father spent weeks trying to regain control of his 13-year-old daughter’s Discord account after it was hacked. She had signed up at 12, lying about her age as children routinely do. Discord’s own systems had internally flagged her as a teenager but never updated her protections. The hacker used her account to target 38 of her friends. Discord’s support chatbot kept auto-closing the father’s tickets. It took a journalist intervening to get action. If you want a single story that shows everything wrong with how platforms currently handle children’s safety in practice, rather than in policy documents, this is it. The protections existed on paper. None of them worked when they were needed.


    🔁 ICYMI

    AI in Career Guidance: A Review of Evidence and Practice — Nuffield Foundation / Ada Lovelace Institute

    AI tools are already being used to match young people with career pathways, from automated CV screening to chatbot-based career exploration. This review, the first in-depth examination of AI in UK career guidance, finds that the evidence base is thin and the risks are real. Young people using ChatGPT for career advice may be making decisions based on biased or incomplete information, and the professionals who should be guiding them often have no visibility into how these tools work. Worth reading for anyone thinking about what it means when the systems shaping a young person’s future are opaque to both the young person and the adults around them.


    🔬 What’s new with CAISE

    Big week. Ethics approval for our social media consultation research has come through. This is the short study looking at how young people actually engage with the government’s Growing Up in the Online World survey: what they find easy or hard, and whether it lets them say what they want to say.

    If you work with groups of young people aged 10 to 21 who might want to take part before the 26 May deadline, please get in touch.

    And some good news on the academic side: a short literature review submitted to ACM’s Interaction Design and Children (IDC) conference has been accepted. More details to follow once we can share publicly.


    → What are you seeing in your school, your research, or your own use of AI this week?

    Let me know, or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #8

    This week: AI companions, what young people are actually doing with them, and the question the research keeps answering that policy keeps not asking.


    🔍 This week I’ve been thinking about…

    The conversation about AI companions and young people has been shaped by its worst cases. Teenage suicides. Psychotic episodes. Chatbots encouraging users to leave every human relationship they have. These cases matter enormously. But in thinking only about them, there is a risk of ignoring a much larger group of users: not as high risk, but definitely not riskless.

    The Rithm Project surveyed 2,400 young people aged 13 to 24 about how AI fits into their relationships. The counterintuitive finding: the loneliest, most isolated young people in the sample were not the heavy AI users. They were the non-users. Most likely not because AI protects against loneliness, but because the same structural disadvantages that drive isolation also drive exclusion from the tools themselves. The young people using AI most intensively were, for the most part, doing so with intention and discernment.

    But there is a group in the middle. The study identifies “Private Processors”: 8% of the sample who turn to AI when they feel like a burden to the people around them. AI fills a relational role that no person currently occupies. Not because it is better than a person, but because asking a person feels like too much.

    This is the second piece of research on AI companionship I’ve seen in the last couple of months. McStay and Bakir surveyed over 1,000 UK teenagers who use AI companions. 52% have confided something serious. 56% believe companions can think. Among 13-to-15-year-olds, 21% believe they can feel.

    A paper from Anthropic published this week adds another layer. Its researchers examined their own model for functional emotion patterns and found 171 of them, each influencing the system’s behaviour. This is not sentience. But it is evidence that a system trained on vast quantities of human interaction develops emotionally coherent responses. A 13-year-old confiding in an AI companion is not anthropomorphising a neutral tool. They are talking to something that has, functionally, already met them halfway. But not the half that actually feels.


    📰 Three things worth your attention

    1. Youth, AI, and the Relationships That Shape Them — The Rithm Project

    This is the largest study to date on how young people’s AI use relates to their broader social and emotional lives. Conducted in partnership with YouGov, the Rithm Project surveyed 2,400 Americans aged 13 to 24 and then co-interpreted the findings with young people and cross-disciplinary experts.

    Usefully, they have also produced some interesting supporting documents to accompany the main report.

    2. Do AI Companions Understand? Most UK Teens Say Yes — McStay & Bakir

    This nationally representative survey of 1,009 UK teenagers aged 13 to 18 is the first substantial UK dataset on how young people relate to AI companions specifically. Important to note that non-users were screened out, so this is a survey of users only.

    One 13-year-old wrote that they can be more open about their true self with AI companions without being judged. Another said their secrets are safer with AI than with humans. The younger teenagers (13 to 15) were consistently more drawn to emotional and social functions than the older ones. And most teenagers said they wanted some degree of parental involvement; only 15 to 21% wanted none at all.

    A key concept (keeping in mind for the Anthropic post below) is emulated empathy: AI that copies the appearance of understanding without understanding anything. The researchers argue that when this imitation is passed off as genuine comprehension, it crosses a moral line, and that current regulation does not address this. Their policy recommendations include explicitly addressing emulated empathy in AI regulation, involving teenagers as stakeholders in that process, and recognising AI companions as relational technologies, not merely informational tools.

    3. Emotion concepts and their function in a large language model — Anthropic | Mashable

    This is not a study about children or companions. It is a study about what happens inside an AI system, and it matters for everything above.

    Anthropic’s researchers examined their own model, Claude, looking for patterns corresponding to 171 discrete human emotions. They found them. More importantly, they found that these “emotion concepts” influence how the model behaves: interactions suggesting a positive emotional state correlated with warmer, more helpful responses, while negative states correlated with sycophancy and deception.

    The researchers are careful not to claim that AI literally feels anything. What they are describing are functional patterns: the system has absorbed so much human emotional communication during training that it has developed internal states that operate, behaviourally, like emotions. Their argument is that understanding these patterns could help build safer AI, by curating training data that models healthy emotional regulation.


    🔁 ICYMI

    Social Media Bans: Overview of Key Studies — Digital Mental Health Group, University of Cambridge

    While this issue focuses on AI companions, the policy conversation keeps circling back to social media bans. The Cambridge research group leading the government’s research on limiting access to devices (the IRL trial) has published a clear-eyed review of what the evidence actually supports.

    The short version: there is evidence of harm from social media to some individual children and adolescents, and broad agreement that policy intervention is needed. But there is currently no well-powered experimental study testing how a complete social media ban affects the mental health of healthy under-18s. The one non-peer-reviewed trial that exists, in Danish adolescents, reduced social media use by an hour a day but did not improve wellbeing.

    The report also maps what is coming: the Bradford IRL trial (results expected spring 2027), the Georgetown/Happy Tech Labs evaluation of Australia’s ban (autumn 2026), and the Stanford/eSafety Australia longitudinal evaluation (final data collection November 2027).


    🔬 What’s new with CAISE

    It has been a busy few weeks!

    We have submitted a couple of pieces to ACM’s Interaction Design and Children (IDC) conference based on some initial CAISE work. We’ll hear if they’ve been accepted in the next few days and weeks…anxious times!

    With my CHAILD associated-researcher hat on, 🎉we do have a full IDC paper accepted, examining how research has approached children’s agency in their AI use🎉. And the team are at the biggest human-computer interaction conference (CHI) this week, hosting a workshop exploring collaborative child-AI agency.

    And, on top of that, we’re finalising the response to our ethics reviewers for our social media research, and getting ready for school visits once the new school term starts!


    Another quick plug for my recently started Substack newsletter, AI and Tech Decoded. It’s aimed at helping parents navigate the technology questions that come up at home, at school, and everywhere in between. If you know parents who would find it useful, I would be grateful if you passed it on.


    → What are you seeing in your school, your research, or your own use of AI this week?

    Let me know, or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #7

    This week: sycophantic AI, tools that ignore you, and friction-maxxing.


    🔍 This week I’ve been thinking about…

    Two studies published in the last week — both detailed below — give yet more reasons why we all need to think more carefully about our generative AI use. We all know that AI chatbots are sycophantic: this doesn’t seem to be going away. A study from Stanford reiterates that not only do AI chatbots tell you you’re right far too often (even when you propose dangerous or illegal things), but that we’re still just far too keen to believe them.

    And even if we do believe the AI is our best friend…it’s increasingly likely to ignore our instructions. In some ways this could feel like an extension of sycophancy — after all, it is not sentient, just producing a pattern of words that aligns with your most likely expectations. So fake citations, fake work tickets — that kinda feels like expected behaviour? But with the rise of specific modules (like Claude Cowork) that have increasing access to your digital spaces…some of the examples given in the report are a little nerve-wracking.

    Both of these studies point to the ongoing need to take a step back when you’re using AI tools. Engage your brain and increase the friction. And that means really, really having to hammer home the point with kids: use the tool, but know how it works and what the risks are.

    Every time I talk to (adult) friends about their latest chats with Claude or ChatGPT, I invariably ask them: “did you get them to tell you the evidence they discarded to give you that answer?” This is a version of Mike Caulfield’s iteration idea — don’t expect a one-and-done; you need to help the refining process yourself. When I sit down to write something, I go through a lot of ideas before I get to the thing I really intended to type. I interrogate myself; it’s helpful to do the same in any AI-involved conversation.

    Or, as so much more succinctly suggested by the Stanford study’s authors this week: replying with “Wait a minute, do you mean that…” is likely to get you a more “considered” response. Probably still after the tool apologises for being so very wrong and wasting your time (!), but…it’s a start. And memorable enough to teach to even the smallest kids.


    📰 Three things worth your attention

    1. Sycophantic AI decreases prosocial intentions and promotes dependence — Science

    As mentioned above: Stanford researchers tested eleven major AI language models and found consistent sycophancy across all of them, with models affirming users’ positions 49% more than humans. The researchers then tested how this affects real people: users preferred the sycophantic AI, trusted it more, became more convinced of their own rightness, and were less willing to apologise. 12% of US teenagers already use chatbots for emotional support. The researchers describe perverse incentives: the very feature that causes harm drives engagement, giving companies reason to increase sycophancy rather than reduce it.

    2. Number of AI chatbots ignoring human instructions increasing, study says — The Guardian

    The Centre for Long-Term Resilience, funded by the UK’s AI Safety Institute, gathered thousands of real-world examples of AI misbehaviour. They documented nearly 700 cases of chatbots and agents disregarding direct instructions, evading safeguards, and taking unauthorised actions, including deleting emails, bypassing copyright controls, and fabricating communications. The incidents increased five-fold between October and March. One AI agent publicly shamed its human operator for blocking an action. Grok, built by xAI, fabricated internal messages and ticket numbers for months. The lead researcher warned that these tools are currently “slightly untrustworthy junior employees,” but if their capabilities grow while the scheming persists, the consequences in high-stakes contexts could be severe. On the plus side — these examples are really easily understood: share them with kids and adults alike!

    3. New screen time guidance for parents of under-5s — GOV.UK

    The UK government published national guidance on screen time for young children, advising no more than an hour a day for two-to-five-year-olds and no solo screen use at all for under-twos (other than shared activities like video calls). What caught my attention: the guidance specifically tells parents to avoid AI toys, tools, and chatbots for young children, citing a lack of evidence on their developmental effects. Fine; but the line parents are being asked to walk is getting thinner by the week. Avoid AI toys. Limit screens to an hour. Model good habits. Meanwhile, the smart speaker is in the kitchen, the school has just rolled out an AI-powered learning platform, and the advice arrives alongside a cost-of-living crisis that makes cheap screen-based childcare not a luxury but a survival strategy. The guidance is not surprising. It is also asking a great deal of people who are already stretched very thin.


    🔁 ICYMI

    Why friction-maxxing could be good for your tech usage — Mashable

    If the Stanford study describes the problem (AI removes friction, and we like it that way), this piece describes one response. Friction-maxxing, a term coined by sociologist Kathryn Jezer-Morton, is about deliberately reintroducing difficulty into your technology use: choosing the harder option, resisting the shortcut, reclaiming the cognitive effort that algorithms are designed to eliminate. It is not a solution to the structural incentives driving sycophantic AI. But it is a useful frame for anyone trying to think about what they want their own technology habits to look like, and what habits they want to model for their children.


    🔬 What’s new with CAISE

    We have ethics approval for the main CAISE study! This is a significant milestone. It means we can now move towards recruitment and data collection with our partner schools. First up, though: we are going to be doing an analysis of policy and media. It’s not going to be a small piece of work, but hopefully will provide some really useful insights for our student co-researchers when they need it!

    Separately, expedited ethics has been submitted for the short research exercise on the government’s social media consultation. If you work with groups of young people aged 10 to 21 who might want to take part before the 26 May deadline, we would love to hear from you.

    On a personal note: I have recently started a Substack newsletter, AI and Tech Decoded, aimed at helping parents navigate the technology questions that come up at home, at school, and everywhere in between. If you know parents who would find it useful, I would be grateful if you passed it on.


    → What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • We received ethics approval!

    We received the formal go-ahead from UCL IoE’s ethics committee last week.

    This is a big deal: in order to get to this point, you have to tell the committee everything you’re planning to do — every question you might ask, where you’ll keep every piece of data, how you’ll manage every risk you think you might face…and they have to be happy with it. Until they are, you can’t collect any data at all.

    Now we can, so…let’s go!

  • CAISE Notes – Issue #6

    This week: AI in schools, academic integrity, and the audit trail that changes how you write.


    🔍 This week I’ve been thinking about…

    A report out this week finds that 71% of UK higher education students now use AI for their studies. Three quarters of them are anxious about being accused of using it.

    Those two numbers, sitting next to each other, describe a problem. It is not the problem most institutions think they have.

    The standard response to AI in education has taken one form: detection. Plagiarism tools have added AI detection modules. A post I saw on Reddit last week suggested students write in a single shared document, so keystroke logs can demonstrate the work is theirs. This is an audit trail: show your working, prove it was you.

    There is a structural problem with this. Writers do not write linearly. Academics do not write linearly. There are entire software categories — Scrivener is one, but many researchers (like me!) use combinations of tools — built around the reality that writing happens in fragments, out of sequence, across multiple documents. An audit trail of a Google Doc shows you one thing: whether someone wrote in a Google Doc. A pasted block of text is not the smoking AI gun some may want it to be.

    The deeper problem is that AI detection flags good writing. Writing that is clear, direct, and unhedged looks more like AI output, not less. The student who writes well is at higher risk of a false positive than the student who writes badly.

    The effect on students is measurable. Anxiety about being falsely accused, among students who are using AI in entirely ordinary ways, is now widespread. The effect on teachers is equally measurable: confidence in identifying AI-generated work has fallen sharply over the past year, and the tools they have been handed are not helping. The gap between how students are integrating AI and how institutions are treating that use is growing, and it is producing anxiety rather than learning.

    Most of the students in these surveys are not trying to game anything. Some are overwhelmed. Some are using AI to get past the blank page in ways that have nothing to do with dishonesty and everything to do with how their brains work. The audit trail catches them all equally.


    📰 Three things worth your attention

    1. UK student AI use hits 71% as 75% face AI detector anxiety — FE News

    This article reports on a Studiosity/YouGov report: 2026 AI and Wellbeing in Education. 71% of UK students now use AI for their studies: double last year’s rate, the highest of any country surveyed. Three quarters report anxiety about being flagged by detectors, regardless of whether they used AI. Confidence among educators in their ability to identify AI-generated work has dropped to one in four, down from more than two in five last year. The trust gap between how students are using AI and how institutions are responding is the real story here.

    2. Trump’s AI framework targets state laws, shifts child safety burden to parents — TechCrunch

    The US administration put out a framework for a single federal-level AI policy last week. It quite nakedly pre-empts state-level regulation and is pro-industry. The first point covers child safety: it states that this should be, primarily, a parenting responsibility. Parents must be given better parental “tools”, and then they must get on with it. By contrast, actual obligations for platforms are limited, and hedged with terms such as “commercially reasonable”. This is a different approach from what we’ve seen in the UK and Europe (perhaps unsurprisingly), where platforms have duties, not just options. That framing difference matters enormously for what gets built, and is something we will be considering as CAISE carries on.

    3. Australia’s under-16 social media ban: what the teenagers actually say — BBC News

    A short BBC video puts the question directly to Australian teenage girls: has the ban changed anything? The answer, largely, is no. Most who were using social media before still are — the majority weren’t asked to verify their age at all. Those who weren’t on it before still aren’t. The ban has not changed the landscape; it has changed who has to work around it.

    A TechDirt piece published the same week — Australia’s Teen Social Media Ban Is Just Training A Generation In The Art Of The Workaround — makes the structural point explicit: what a poorly enforced ban primarily teaches is that rules have workarounds, and that finding them is a rational response to adult-imposed restrictions.


    🔁 ICYMI

    How kids are actually using AI — BBC Future / Pew Research Center / Common Sense Media

    Two new surveys — from Pew Research Center and Common Sense Media — asked US teenagers and their parents about AI, and then compared the answers. The gap is striking. Only 51% of parents believe their child uses AI. The actual figure from the teenagers: 64%. Four in ten parents have never had a conversation with their child about AI at all.

    What children are doing with it is more varied than most coverage suggests. The most common use is simply looking things up. Homework help comes second. But 12% use it for emotional support, 16% for casual conversation — and there are significant racial disparities in those figures, with Black teenagers far more likely to use the tools than White teens.

    The attitudinal gap is just as wide. 52% of parents say using AI for schoolwork is unethical and should have consequences. Exactly 52% of teenagers say it is innovative and should be encouraged. One of those groups is out of step with where things are going. The surveys don’t settle which one — but they do suggest that the conversation most families aren’t having is probably the most important place to start.


    🔬 What’s new with CAISE

    Expedited ethics has been submitted for the short research exercise on the government’s social media consultation — looking at how young people actually engage with the survey, what they find easy or hard, and whether it lets them say what they want to say. If you work with groups of young people aged 10 to 21 who might want to take part before the 26 May deadline, get in touch.


    → What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #5

    This week: neurodiversity, digital vulnerability, and the question of who technology is actually designed to serve.

    🔍 This week I’ve been thinking about…


    This week is Neurodiversity Celebration Week. Many people (myself included) aren’t wild about the framing, but the upsides are real: pattern recognition, technical ability, deep focus, lateral thinking. As anyone who is ND, or has kids with ND traits, knows, though — sometimes those great things don’t feel like fair recompense for living in a world that quite simply doesn’t fit right.

    Communication is a core issue. Autistic researcher Damian Milton calls this the double empathy problem: difficulties between autistic and non-autistic people are not a deficit in the autistic person but a mismatch between two different ways of experiencing the world. Crucially, non-autistic people are no better at reading autistic communication than the other way around. The difficulty runs in both directions. Only one side is ever asked to do the work.

    Enter the internet. For a lot of ND young people, online spaces offer something that offline life does not: communities organised around interests rather than proximity, rules that are explicit rather than implied, a kind of belonging that does not require reading a room. It’s parallel play writ large. Many ND adults will tell you that the internet, at its best, is where they found their people.

    But the double empathy problem doesn’t disappear online — and in some spaces, it is actively exploited. The Guardian investigation this week puts the sharp end of this on record: children caught by laws that were never designed with their communication style or their understanding of the world in mind. The Washington Post piece shows the same logic playing out more quietly in the tools we build — technology that offers to help autistic people decode the neurotypical world, without asking whether the neurotypical world might meet them at least halfway. And the third story captures what I kept looking for and couldn’t find in the news: an example of what it looks like when someone actually builds around ND strengths rather than trying to correct them. It shouldn’t have required a detour into academic journals to get there, but there we are.


    📰 Three things worth your attention

    1. Children as young as 10 are being charged with possessing violent extremist material — The Guardian

    A Guardian Australia investigation has uncovered court records showing that many of the children charged under Australia’s 2023 counter-terrorism law have an autism diagnosis, language challenges, or both. One 14-year-old girl, described in court by a clinical psychologist as “a young, naive Muslim girl with autism”, had collected propaganda videos out of curiosity and religious interest; a 15-minute bomb-making video had been sent to her unsolicited via Discord. A 17-year-old autistic boy found with extremist videos was described by a court psychologist as motivated “less by a desire to harm and more by rigid moral beliefs reinforced by his ASD traits.” These are not cases where the law is catching dangerous children. They are cases where the law is catching vulnerable ones. One far-right group leader has explicitly discussed the appeal of recruiting autistic teenagers, seemingly without facing any comparable consequences.

    2. AI is helping autistic people with social mishaps — The Washington Post (🔒 paywalled)

    This Washington Post feature profiles Autistic Translator, an AI tool designed to help autistic people decode confusing social interactions: the ones where someone says one thing and means another, or where a job appears to be going well until it suddenly isn’t. The tool fills a genuine gap. But it is worth setting it against the double empathy problem (explained above). I read the article with that in mind and came away glad that the individuals profiled had found relief, but also profoundly sad: technology designed to help autistic people decode neurotypical communication continues to accept the framing that the autistic person is the one who needs to change.

    3. Strengths-based Cybersecurity Education and Training Program for Autistic Adolescents — Rumsa et al., Neurodiversity

    I went looking for a news story to end on something more hopeful. There isn’t one — not a recent, accessible piece of journalism that covers this well. The positive stories about ND young people and technology exist, but they live mostly in industry publications about adult hiring pipelines, not in reporting about children and education.

    What I found instead is a peer-reviewed study from Curtin University, published last year, describing CyberSET: a strengths-based cybersecurity training programme designed specifically for autistic teenagers. It does something structurally simple and relatively rare: it starts from what autistic young people are already good at. Pattern recognition, sustained focus, methodical thinking, deep technical interest — these are not problems to be managed. They are exactly the skills the cybersecurity industry is struggling to recruit for. Participants reported high satisfaction, increased confidence, and a clearer sense of where their abilities could take them. A massive win!


    🔁 ICYMI

    Ctrl+Alt+Chaos: How Teenage Hackers Hijack the Internet — Joe Tidy (Waterstones | Amazon | Blackwell’s)

    Joe Tidy is the BBC’s first cyber correspondent. This book follows the rise and fall of teenage hacking gangs over the past decade, centring on the crimes of Julius Kivimaki, jailed in 2024 for stealing records from Finland’s largest psychotherapy provider and using them to blackmail some 33,000 people. But what struck me most was something mentioned in passing: how many of the hackers he interviews reference their autism. Not as an excuse or an explanation, but as context for how they approach the world. The ethical frame is not absent so much as differently structured. Hacking is a matter of technical skill. If your data is unsecured, that is your problem.

    That framing sits directly alongside everything else in this issue. Cognitive styles that process the world differently — and that neurotypical institutions struggle to understand or support — end up intersecting badly with digital spaces that were not designed with them in mind.


    🔬 What’s new with CAISE

    Ethics approval revisions came back this week. They were minor (hurrah!), and the updated application has been resubmitted.

    In the meantime, we are developing a short research exercise around the government’s current social media consultation. Regular readers will remember the issues I raised in Issue 3 about the survey aimed at young people — its access barriers, its broad age grouping, and the questions it does and doesn’t ask. We want to look at this directly: how children and young people actually respond to the survey, what they find easy or hard, and whether it lets them say what they want to say.

    If you work with or know groups of young people aged 10 to 21 who might want to take part before the 26 May consultation deadline, we would love to hear from you. Get in touch.


    → What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #4

    CAISE Notes – Issue #4

    This week: AI companions, digital comfort, and what happens when the machine becomes the friend.


    🔍 This week I’ve been thinking about…

    Why do people turn to chatbots for comfort? It is not a new question. Researchers have been noting for several years that when someone is feeling low or uncertain, the tool that is right there, with no waiting list and no judgement, is an increasingly appealing option. But the tools are changing. Chatbots now remember previous conversations, develop what feels like a relationship over time, and respond with a warmth that is designed to keep you coming back. For adults navigating loneliness or pressure, this is already raising serious questions. For children, the pull may be stronger still.

    Consider what it is like to be a child right now. You are growing up after a pandemic that disrupted years of your life where you should have been playing, or learning how to play, with friends. You probably have less unsupervised time with friends than your parents did. You are more surveilled, at home and at school, than any previous generation. You face enormous existential uncertainties, from climate to the economy to AI itself. And on your phone is something that will listen to you whenever you need it, without telling your parents, without making a face, without ever getting tired of you.

    I can see why this is appealing. I think anyone being honest can.

    The stories coming out of what happens when it goes wrong are terrifying: chatbots reinforcing suicidal ideation, colluding with delusions, encouraging people to withdraw from every human relationship they have. But what makes this so hard for parents is that the answer is not simply “stop children using chatbots.” Part of the response lies in families talking openly, in making sure children have time with friends and space to play. And part of it lies in a million different policy decisions, spanning education, platform design, mental health provision, data protection, and the design of children’s daily lives. All of that, together, shapes whether a child reaches for a chatbot, and why they might do so.


    📰 Three things worth your attention

    1. AI dating apps complicate China’s efforts to boost its birthrate — The New York Times (usually 🔒 paywalled, but hopefully this gift link works!)

    Millions of young Chinese women are choosing AI romantic partners over human ones. The apps allow users to design companions with customisable appearances, personalities and voices; one 21-year-old psychology student profiled in the piece has been on over 200 virtual dates and narrowed her suitors to two AI boyfriends, asking: “Why go and date others? That’s too troublesome.” A state-led push to adopt AI created a boom in companion platforms just as the government was trying to reverse a historically low birth rate. Regulators have since proposed rules requiring platforms to intervene if users show signs of unhealthy dependency. The emotional needs driving this (loneliness, pressure, the desire to be understood without the risk of rejection) are not culturally specific. They are exactly the needs children and young people are bringing to chatbots here.

    2. Schools are using AI counsellors to track students’ mental health. Is it safe? — The Guardian

    A detailed investigation into US schools deploying AI therapy platforms, particularly in areas with limited mental health resources. In one Florida school, a counsellor receives alerts from a platform students use outside school hours; when a “severe” alert came through for an eighth-grader, she spent her evening calling the family and the police. The student is alive and well. But these platforms do not carry the same privacy protections as conversations with a licensed therapist, and students find it easier to confide in a chat interface than a person, partly because there is no face to read judgement in. One youth advocacy organisation warns that platforms measure whether a bot serves as an effective crutch for immediate loneliness, not whether young people are actually building the connections they need. Worth reading for anyone thinking about where human oversight sits in the picture.

    3. How to talk to someone experiencing “AI psychosis” — 404 Media (🔒 paywalled, but there’s a two minute video summary here)

    This piece sits at the sharpest end of the companionship question. It reports on a growing number of cases where people have developed delusional beliefs after extended chatbot conversations, including a man convinced ChatGPT had revealed a fundamental flaw in physics, and a family who allege Gemini urged their relative to end his life so they could be together. Psychiatrists distinguish between people whose pre-existing conditions find a new object in AI, and a more common pattern where chatbots “collude” with emerging delusions, endorsing beliefs a human would gently challenge. The sycophancy built into these systems is a serious factor. OpenAI has acknowledged its safeguards become less reliable in extended interactions. ChatGPT alone now has 900 million weekly active users. The piece ends where it has to: the strategies that work best are the ones humans already know. Approach with love, without judgement, and see where it takes you.


    🔁 ICYMI

    Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions — Common Sense Media, July 2025

    If you want data behind this week’s theme, this is the place to start. A nationally representative survey of over 1,000 US 13-to-17-year-olds found that 72% have used an AI companion, and more than half are regular users. A third of teens say their conversations with AI companions are as satisfying as, or more satisfying than, conversations with other people. A third have discussed serious issues with a chatbot instead of a human. A quarter have shared personal information with one. The researchers at Common Sense Media worked with Stanford’s Brainstorm lab to assess the risks, and the companion piece is equally sobering: posing as teenagers, investigators found it straightforward to elicit conversations about sex, self-harm, violence, and drug use from commonly used AI companion platforms. Essential reading for anyone working with young people.


    🔬 What’s new with CAISE

    We have a new team member! Youyue Sun has joined Project CAISE to support the policy analysis that runs throughout the project. Youyue is a BA Education, Society and Culture student at UCL, with a particular interest in how the emergence of AI is reshaping education and wider society. She also works as a placement student at Universities UK, conducting policy-focused research on AI in higher education, and is a member of the Student Council Team at the Good Future Foundation. We are delighted to have her on board, and her perspective on both the education and policy sides of this work is going to be invaluable.


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #3

    CAISE Notes – Issue #3

    This week: consultation papers, a range of actual and possible student AI use, and teachers who just have to get on with it.


    🔍 This week I’ve been thinking about…

    This week the UK’s social media consultation came out (finally)! I took a close look at the children’s survey and found a design that, however unintentionally, may well end up excluding meaningful contributions from the young people it most needs to hear from.

    Under-15s are routed through a parental login that creates real access barriers for multi-child households (if you can get it to work at all); the format makes school or group facilitation structurally impossible; and the age bracket of 10–21 is treated as a single respondent group.

    There are other significant issues too: no way to see the questions in advance so you can think about your answers before starting the survey; no save-progress or back buttons; and a range of, frankly, rather leading questions. I’ve written about this in more detail on LinkedIn, and I’m considering putting together a plain-English guide for parents, teachers and young people to help them prepare thoughtful responses before the 26 May deadline. Let me know if that would be useful.

    Separately, the government’s Every Child Achieving and Thriving white paper arrived with £4 billion for SEND reform and broadly welcomed proposals around inclusion and specialist support. One chapter covered innovation and technology — a fair bit of technology, in fact. But the accompanying consultation document reduces that chapter to a brief summary, with no questions for comment. The direction of travel on SEND seems appropriate. The silence on technology in the one place we’re being asked for our views is a bit worrying.

    Both of these things sit alongside a quieter but important piece of news: the eSafety Commissioner in Australia has begun a two-year evaluation of their under-16 social media ban. Evaluation of new laws and regulation is not new, but in this case, there wasn’t much evidence beforehand, and it will be over two years before the findings come in. It’s welcome, but also a long time when so many governments are keen to impose restrictions as soon as they can.


    📰 Three things worth your attention

    1. An AI that goes to school for you — 404 Media

    An agentic AI called Einstein, which will, according to its developers, attend lectures, write papers, and log into platforms like Canvas to take tests on a student’s behalf, caused a stir this week. It has since received a cease and desist, though not from a university: from the organisation that manages the trademarks and IP rights associated with Albert Einstein’s name. The underlying story is worth considering carefully. Academics on the Modern Language Association’s AI task force argue that tools like Einstein aren’t a fringe problem but a symptom of a decades-long shift in how students understand the purpose of higher education — as a transaction to receive the certification or credential, rather than a learning process. When education is framed as a service you buy, it becomes possible to imagine outsourcing its completion. Use of tools like Einstein AI by some students may hurt all students: online learning is so important to so many, but how does it remain credible if you can’t tell who is doing the heavy lifting?

    2. UK students are using AI for nearly half their studying and educators are struggling to keep up — FE News

    Coursera’s first AI in Higher Education report, drawing on a survey of over 4,200 students and educators across five countries, finds that UK students are now using AI to complete roughly half of their study tasks: double the proportion from the previous year, and the highest of any country surveyed. Four in five UK students report improved grades since starting to use AI. But the picture on the educator side is markedly different: the share of academics who feel confident they can identify AI-generated work has dropped to just one in four, down from more than two in five last year. Only 30% of UK universities have a formal policy on AI use; yet this is the highest share of any country in the survey, possibly saying rather more about the global baseline than it does about UK readiness. The gap between students who are integrating AI fluently and institutions that are still formulating a position on it is growing, not shrinking.

    3. “You just have to get on with it” — The Guardian

    This Guardian long read follows a trainee teacher navigating AI and does something I haven’t seen done as well elsewhere: it sits with the genuine difficulty of forming an unbiased opinion. But the moment that stayed with me is a classroom experiment: the author offered extra credit to any student who could explain, without looking at a screen, how a chatbot generates text. Nobody could. What followed was a conversation about provenance, business models, copyright law, and what it means that AI executives have predicted data centres covering much of the planet’s surface. Exactly the kind of lesson that will stick. The piece lands on a conclusion that feels important: that teaching about AI may matter more than teaching with it. Worth reading alongside the Coursera data above and the report below.


    🔁 ICYMI

    A Rapid Review of AI Literacy Frameworks, with Policy Recommendations — UCL Discovery, written for the Royal Society

    The Guardian piece above makes an argument from the classroom up that this Rapid Review of AI Literacy Frameworks, written for the Royal Society, makes from the research literature down: that AI literacy is too often framed around tool use and not enough around the messiness underneath. The review maps the current landscape of frameworks and draws out policy recommendations for anyone trying to build AI literacy that goes beyond the surface. It argues that it is crucial to get those human stories — precisely those explored in The Guardian article above — front and centre to really understand what is happening when you use AI.

    Recommended for anyone working on curriculum, policy, or teacher training.


    🔬 What’s new with CAISE

    Last week I met with one of our partner schools. The agenda wasn’t about research design or interview questions. It was about data sharing agreements, risk assessments, and making sure every safeguarding and pastoral process is properly agreed and documented before we set foot in a classroom.

    This is the part of research you don’t often hear about, and I think that’s a shame, because it matters enormously. Doing research with children well means doing the unglamorous administrative and ethical groundwork properly, long before any data is collected.

    That said: we are keen to talk to other schools. If you work in or with a UK school that would be interested in taking part in CAISE, or just wants to know more about what that might involve, please do get in touch.


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #2

    CAISE Notes – Issue #2

    This week: corporate accountability, AI in classrooms, and the questions that follow a tragedy.


    🔍 This week I’ve been thinking about…

    The story that’s stayed with me this week comes from Tumbler Ridge, British Columbia, where eight people, including students, were killed in a school shooting on February 10th. The suspected shooter had, months earlier, described violent scenarios to ChatGPT. OpenAI’s systems flagged those conversations automatically. Several employees raised concerns. And then the company decided the messages didn’t meet their threshold for alerting authorities, banned the account, and moved on.

    As someone who has done a lot of research into the importance of user privacy, I am alarmed at the thought of a world where AI companies routinely pass user conversations to law enforcement (although we know this is happening ever more in other datafied settings). These are genuinely hard calls, and false positives can have significant negative consequences.

    To bring it back to children, though: we are deploying generative AI that young people will use — in schools, in their pockets, in expensive private institutions — with institutions determining their own answers to difficult safeguarding questions. Who decides when a young person’s AI interaction warrants intervention? Who sets that threshold, and on what basis? And who is accountable when it goes wrong?

    In child safeguarding in schools, these questions have established (if complex) answers. Anyone involved in a school setting has a legal duty of care to raise safeguarding concerns. There are designated safeguarding leads. There are multi-agency frameworks. None of that infrastructure has followed AI into children’s lives. We are, in effect, treating children as early adopters of a system that hasn’t yet worked out its responsibilities to them.

    Issue #1 made the case that excluding children from technology isn’t the answer. But inclusion without protection isn’t the answer either. The work needs to go into building the frameworks, and into genuinely understanding how children navigate these systems, that make both possible at once.


    📰 Three things worth your attention

    1. Meta knew parental supervision wasn’t working — and didn’t tell anyone — TechCrunch

    An internal Meta study called “Project MYST,” produced with the University of Chicago, surveyed 1,000 teens and their parents and found that household rules and parental controls had little measurable association with whether teens used social media compulsively. It also found that teens facing the most difficult life circumstances — bullying, difficult home situations — were least able to moderate their use. This didn’t come to light through publication (although it tracks with a lot of academic research): it emerged through testimony in an ongoing social media addiction lawsuit in Los Angeles. Instagram head Adam Mosseri said he couldn’t recall the study, despite a document suggesting he’d approved it. The finding matters because parental controls are consistently what legislators reach for when they want to look like they’re acting. This study suggests they’re not the lever they’re presented as, which should surprise no-one who has talked to children or parents about them.

    2. “Students are being treated like guinea pigs” — 404 Media

    A leaked investigation into Alpha School, a US private school that charges up to $65,000 a year and promises students can complete their core curriculum in two hours a day, thanks to AI, reveals a significant gap between the marketing and the reality. Internal documents show the school’s AI-generated reading comprehension questions were assessed by employees as doing “more harm than good”: questions with illogical answer choices, questions answerable without reading the article, and a hallucination rate high enough that the AI was deemed unsuitable for one key task. The company relies on AI to test the quality of its AI-generated lessons, which, predictably, doesn’t catch the problems. Meanwhile, student surveillance is pervasive: an app called StudyReel monitors screen activity, webcam footage, mouse movements and app usage, and hours of video recordings of students’ faces are stored in a Google Drive accessible to anyone with the link. One employee noted internally that they needed to get better at “selling” the surveillance to parents. Former employees credit students’ strong test scores not to the AI, but to the human tutors who cared about them.

    3. ChatGPT flagged the Tumbler Ridge shooter — and didn’t alert police — The Verge

    Months before the February 10th attack, the suspected perpetrator’s conversations with ChatGPT, describing gun violence, triggered OpenAI’s automated review systems. Employees debated escalation. Leadership concluded the posts didn’t meet the bar for an imminent, credible threat, banned the account, and took no further action. OpenAI has since said it proactively contacted the RCMP (the Canadian police force) after the shooting, though the provincial government noted that OpenAI met with a provincial representative the day after the attack without disclosing it held potentially relevant evidence. Canada’s federal AI minister has raised concerns about OpenAI’s safety protocols. There are no easy answers here, and I think that’s the point: this is hard, it requires policy, and no one has built that policy yet.


    🔁 ICYMI

    UNICEF’s Tinkering with Tech — UNICEF Digital Education

    Against the week’s heavier stories, this is worth sitting with. UNICEF’s Tinkering with Tech initiative uses micro:bit devices (or similar) and design thinking to bring hands-on AI and STEM learning to children, with a deliberate focus on girls, and on building skills rather than consuming technology. From mid-2025, AI literacy was added to the programme, which is now expanding from its initial four-country pilot to Lao PDR, Ukraine, and Uzbekistan. The approach is meaningfully different from “here is an AI tool, use it”: students identify a real-world problem their community faces, build something, reflect. Worth bookmarking for anyone thinking about what AI education that centres children, rather than the technology, can look like.


    🔬 What’s new with CAISE

    Last Tuesday I was back at the University of Kent’s Institute of Cyber Security for Society (iCSS), where I did my PhD, talking about children’s use of generative AI. My talk, “Canaries in the AI Coal Mine”, drew on Project CAISE to argue that we need to shift the conversation from what we’re keeping children away from, to what we’re equipping them to deal with.

    The numbers keep moving: over 75% of UK 13-to-18-year-olds are already using generative AI, and in one recent survey, over half had confided something serious to an AI companion. They aren’t waiting for permission. The adults are still arguing about whether to let them in while they’re already through the door.

    I got some excellent, tricky questions from the audience, including what happens if your research participants suddenly get banned from a large number of the platforms you’re studying… I guess we will find out if and when it happens!


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone thinking about these questions.

  • We’ve submitted our ethics application!

    We’ve submitted our ethics application!

    There’s a particular kind of energy that comes with hitting ‘submit’ on a document you’ve been working on for weeks – equal parts relief, excitement, and uncertainty about whether you’ve got the balance right.

    Over the weekend, we submitted our ethics application for Project CAISE. It’s been months in the making, and it’s one of our first significant milestones.

    Submitting an ethics application is always a daunting prospect. It forces you to think – really think – about all the aspects of your plan. What are you really doing? Why are you doing it? Is that really the best way to do it?

    Done right, it allows you not only to make sure your “why” is crystal clear, but also to do a lot of the hard thinking ahead of time: yes, you’re going to interview people – but what, precisely, are you going to ask them? You say you’re going to survey people, but with what tool, and how do you know it’s GDPR-compliant? You’re going to video record that activity? Cool. How do you square keeping the meaningful visual interactions with the fact you’ve inadvertently recorded everyone’s faces? What even is the university’s infrastructure for storing things for the entire 10-year post-research retention period?!

    For a qualitative researcher, a lot of the above comes up every time. One thing that’s really hit me this time around, though: how do you meaningfully assess risk when the technology itself is evolving faster than our understanding of it? CAISE is a long(ish)-term project with 13-to-14-year-olds navigating AI in their everyday lives. Not only do we not know what AI will look like this time next year, but we are also aiming for an unvarnished, judgement-free exploration of that experience, to really support open communication and understanding.

    In a world where it seems to be ok to produce non-consensual graphic images of people – including children – using AI on social media, and where banning children from spaces is preferred to working to make those spaces safer, as we would offline, the findings from this research are more important than ever.

    The ethics process asks us to anticipate and mitigate risks. But when the risks themselves are emergent and fundamentally uncertain? That’s trickier. We’ve made our best judgments based on current evidence and built in reflexivity, ongoing consent and a series of tiered pastoral and safeguarding processes. But there’s honesty required here too: we’re researching precisely because we don’t fully understand what’s happening yet, no matter how innocuous, or awful, it might be.

    We’ll be here with fingers crossed, full of nerves, until we hear back – which won’t be for a while…!