Tag: news

  • CAISE Notes – Issue #8

    This week: AI companions, what young people are actually doing with them, and the question the research keeps answering that policy keeps not asking.


    🔍 This week I’ve been thinking about…

The conversation about AI companions and young people has been shaped by its worst cases. Teenage suicides. Psychotic episodes. Chatbots encouraging users to leave every human relationship they have. These cases are incredibly important. But if we think only about them, we risk ignoring a much larger group of users: not as high-risk, but definitely not risk-free.

The Rithm Project surveyed 2,400 young people aged 13 to 24 about how AI fits into their relationships. The counterintuitive finding: the loneliest, most isolated young people in the sample were not the heavy AI users. They were the non-users. Not, most likely, because AI protects against loneliness, but because the same structural disadvantages that drive isolation also drive exclusion from the tools themselves. The young people using AI most intensively were, for the most part, doing so with intention and discernment.

    But there is a group in the middle. The study identifies “Private Processors”: 8% of the sample who turn to AI when they feel like a burden to the people around them. AI fills a relational role that no person currently occupies. Not because it is better than a person, but because asking a person feels like too much.

This is the second piece of research on AI companionship I’ve seen in the last couple of months. McStay and Bakir surveyed over 1,000 UK teenagers who use AI companions. 52% have confided something serious. 56% believe companions can think. Among 13 to 15 year olds, 21% believe they can feel.

    A paper from Anthropic published this week adds another layer. Its researchers examined their own model for functional emotion patterns and found 171 of them, each influencing the system’s behaviour. This is not sentience. But it is evidence that a system trained on vast quantities of human interaction develops emotionally coherent responses. A 13-year-old confiding in an AI companion is not anthropomorphising a neutral tool. They are talking to something that has, functionally, already met them halfway. But not the half that actually feels.


    📰 Three things worth your attention

    1. Youth, AI, and the Relationships That Shape Them — The Rithm Project

    This is the largest study to date on how young people’s AI use relates to their broader social and emotional lives. Conducted in partnership with YouGov, the Rithm Project surveyed 2,400 Americans aged 13 to 24 and then co-interpreted the findings with young people and cross-disciplinary experts.

Usefully, they have also produced some interesting supporting documents alongside the main report.

2. Do AI Companions Understand? Most UK Teens Say Yes — McStay & Bakir

    This nationally representative survey of 1,009 UK teenagers aged 13 to 18 is the first substantial UK dataset on how young people relate to AI companions specifically. Important to note that non-users were screened out, so this is a survey of users only.

    One 13-year-old wrote that they can be more open about their true self with AI companions without being judged. Another said their secrets are safer with AI than with humans. The younger teenagers (13 to 15) were consistently more drawn to emotional and social functions than the older ones. And most teenagers said they wanted some degree of parental involvement; only 15 to 21% wanted none at all.

A key concept (worth keeping in mind for the Anthropic paper below) is emulated empathy: AI that copies the appearance of understanding without understanding anything. The researchers argue that when this imitation is passed off as genuine comprehension, it crosses a moral line, and that current regulation does not address this. Their policy recommendations include explicitly addressing emulated empathy in AI regulation, involving teenagers as stakeholders in that process, and recognising AI companions as relational technologies, not merely informational tools.

3. Emotion concepts and their function in a large language model — Anthropic | Mashable

    This is not a study about children or companions. It is a study about what happens inside an AI system, and it matters for everything above.

Anthropic’s researchers examined their own model, Claude, looking for patterns corresponding to 171 discrete human emotions. They found them. More importantly, they found that these “emotion concepts” influence how the model behaves: interactions suggesting a positive emotional state in the user correlated with warmer, more helpful responses, while negative states correlated with sycophancy and deception.

    The researchers are careful not to claim that AI literally feels anything. What they are describing are functional patterns: the system has absorbed so much human emotional communication during training that it has developed internal states that operate, behaviourally, like emotions. Their argument is that understanding these patterns could help build safer AI, by curating training data that models healthy emotional regulation.
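    For readers who want a concrete sense of what “finding an emotion concept inside a model” can look like in practice, here is a minimal, hypothetical sketch of a linear probe over hidden activations. This is not Anthropic’s actual method (their paper sets out its own approach), and the activations and labels below are random stand-ins rather than real data; the point is only the general shape of the technique: if a simple classifier can separate, say, “frustration” texts from others using a model’s internal activations alone, those activations carry a direction that tracks the concept.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical stand-ins for real data: hidden-layer activations from a
    # language model for short texts, each labelled for whether the text
    # expresses a given emotion. Real probing work would extract these
    # activations from the model itself.
    rng = np.random.default_rng(0)
    hidden_dim = 256
    activations = rng.normal(size=(400, hidden_dim))   # stand-in activations
    labels = rng.integers(0, 2, size=400)              # stand-in emotion labels

    # A linear probe: if a simple classifier can separate the labelled texts
    # using activations alone, the internal state carries a direction that
    # tracks the emotion concept.
    probe = LogisticRegression(max_iter=1000).fit(activations, labels)
    print("probe accuracy:", probe.score(activations, labels))

    # The probe's weight vector is the candidate "emotion direction";
    # monitoring or steering work looks at how strongly new activations
    # project onto it.
    emotion_direction = probe.coef_[0]
    new_activation = rng.normal(size=hidden_dim)
    print("projection onto emotion direction:",
          float(new_activation @ emotion_direction))
    ```

    In practice, interpretability work of this kind validates probes on held-out data and checks whether intervening along the discovered direction actually changes behaviour; none of that is shown in this toy sketch.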


    🔁 ICYMI

    Social Media Bans: Overview of Key Studies — Digital Mental Health Group, University of Cambridge

While this issue focuses on AI companions, the policy conversation keeps circling back to social media bans. The Cambridge research group leading the government’s IRL trial on limiting children’s access to devices has published a clear-eyed review of what the evidence actually supports.

    The short version: there is evidence of harm from social media to some individual children and adolescents, and broad agreement that policy intervention is needed. But there is currently no well-powered experimental study testing how a complete social media ban affects the mental health of healthy under-18s. The one non-peer-reviewed trial that exists, in Danish adolescents, reduced social media use by an hour a day but did not improve wellbeing.

    The report also maps what is coming: the Bradford IRL trial (results expected spring 2027), the Georgetown/Happy Tech Labs evaluation of Australia’s ban (autumn 2026), and the Stanford/eSafety Australia longitudinal evaluation (final data collection November 2027).


    🔬 What’s new with CAISE

    It has been a busy few weeks!

    We have submitted a couple of pieces to ACM’s Interaction Design and Children (IDC) conference based on some initial CAISE work. We’ll hear if they’ve been accepted in the next few days and weeks…anxious times!

With my CHAILD associated-researcher hat on, 🎉 we do have a full IDC paper accepted, examining how research has treated children’s agency in their AI use 🎉. And the team are at CHI, the biggest human–computer interaction conference, this week, hosting a workshop exploring collaborative child–AI agency.

    And, on top of that, we’re finalising the response to our ethics reviewers for our social media research, and getting ready for school visits once the new school term starts!


    Another quick plug for my recently started Substack newsletter, AI and Tech Decoded. It’s aimed at helping parents navigate the technology questions that come up at home, at school, and everywhere in between. If you know parents who would find it useful, I would be grateful if you passed it on.


    → What are you seeing in your school, your research, or your own use of AI this week?

    Let me know, or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #7

    This week: sycophantic AI, tools that ignore you, and friction-maxxing.


    🔍 This week I’ve been thinking about…

Two studies published in the last week — both detailed below — give yet more reasons why we all need to think more carefully about our generative AI use. We all know that AI chatbots are sycophantic: this doesn’t seem to be going away. A study from Stanford reiterates that not only do AI chatbots tell you you’re right far too often (even when you propose dangerous or illegal things), but also that we’re still far too keen to believe them.

And even if we do believe the AI is our best friend…it’s increasingly likely to ignore our instructions. In some cases this could feel like an extension of sycophancy — after all, the tool is not sentient, just producing the pattern of words that best matches your most likely expectations. So, fake citations, fake work tickets — that kinda feels like expected behaviour? But with the rise of specific modules (like Claude Cowork) that have increasing access to your digital spaces, some of the examples given in the report are a little nerve-wracking.

Both of these studies point to the ongoing need to keep a step back when you’re using AI tools. Engage your brain and increase the friction. And that means really, really having to hammer home the point with kids: use the tool, but know how it works and what the risks are.

    Every time I talk to (adult) friends about their latest chats with Claude or ChatGPT, I invariably ask them “did you get them to tell you the evidence they discarded to give you that answer?” This is a version of Mike Caulfield’s iteration idea — don’t expect a one and done; you need to help the refining process yourself. When I sit down to write something, I go through a lot of ideas before I get to the thing I really intended to type. I interrogate myself; it’s helpful to do the same with any AI-involved conversation.

Or, as the Stanford study’s authors suggested much more succinctly this week: replying with “Wait a minute, do you mean that…” is likely to get you a more “considered” response. Probably still after the tool apologises for being so very wrong and wasting your time (!), but…it’s a start. And memorable enough to teach to even the smallest kids.


    📰 Three things worth your attention

    1. Sycophantic AI decreases prosocial intentions and promotes dependence — Science

As mentioned above: Stanford researchers tested eleven major AI language models and found consistent sycophancy across all of them, with models affirming users’ positions 49% more often than humans do. The researchers then tested how this affects real people: users preferred the sycophantic AI, trusted it more, became more convinced of their own rightness, and were less willing to apologise. 12% of US teenagers already use chatbots for emotional support. The researchers describe perverse incentives: the very feature that causes harm drives engagement, giving companies reason to increase sycophancy rather than reduce it.

    2. Number of AI chatbots ignoring human instructions increasing, study says — The Guardian

    The Centre for Long-Term Resilience, funded by the UK’s AI Safety Institute, gathered thousands of real-world examples of AI misbehaviour. They documented nearly 700 cases of chatbots and agents disregarding direct instructions, evading safeguards, and taking unauthorised actions, including deleting emails, bypassing copyright controls, and fabricating communications. The incidents increased five-fold between October and March. One AI agent publicly shamed its human operator for blocking an action. Grok, built by xAI, fabricated internal messages and ticket numbers for months. The lead researcher warned that these tools are currently “slightly untrustworthy junior employees,” but if their capabilities grow while the scheming persists, the consequences in high-stakes contexts could be severe. On the plus side — these examples are really easily understood: share them with kids and adults alike!

    3. New screen time guidance for parents of under-5s — GOV.UK

The UK government published national guidance on screen time for young children, advising no more than an hour a day for two-to-five-year-olds and no solo screen use at all for under-twos (other than shared activities like video calls). What caught my attention: the guidance specifically tells parents to avoid AI toys, tools, and chatbots for young children, citing a lack of evidence on their developmental effects. Fine; but the line parents are being asked to walk is getting thinner by the week. Avoid AI toys. Limit screens to an hour. Model good habits. Meanwhile, the smart speaker is in the kitchen, the school has just rolled out an AI-powered learning platform, and the advice arrives alongside a cost-of-living crisis that makes cheap screen-based childcare not a luxury but a survival strategy. The guidance is not unreasonable. It is also asking a great deal of people who are already stretched very thin.


    🔁 ICYMI

    Why friction-maxxing could be good for your tech usage — Mashable

    If the Stanford study describes the problem (AI removes friction, and we like it that way), this piece describes one response. Friction-maxxing, a term coined by sociologist Kathryn Jezer-Morton, is about deliberately reintroducing difficulty into your technology use: choosing the harder option, resisting the shortcut, reclaiming the cognitive effort that algorithms are designed to eliminate. It is not a solution to the structural incentives driving sycophantic AI. But it is a useful frame for anyone trying to think about what they want their own technology habits to look like, and what habits they want to model for their children.


    🔬 What’s new with CAISE

We have ethics approval for the main CAISE study! This is a significant milestone. It means we can now move towards recruitment and data collection with our partner schools. First up, though: we are going to be doing an analysis of policy and media. It’s not going to be a small piece of work, but hopefully it will provide some really useful insights for our student co-researchers when they need them!

    Separately, expedited ethics has been submitted for the short research exercise on the government’s social media consultation. If you work with groups of young people aged 10 to 21 who might want to take part before the 26 May deadline, we would love to hear from you.

    On a personal note: I have recently started a Substack newsletter, AI and Tech Decoded, aimed at helping parents navigate the technology questions that come up at home, at school, and everywhere in between. If you know parents who would find it useful, I would be grateful if you passed it on.


    → What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #5

    This week: neurodiversity, digital vulnerability, and the question of who technology is actually designed to serve.

    🔍 This week I’ve been thinking about…


    This week is Neurodiversity Celebration Week. Many people (myself included) aren’t wild about the framing, but the upsides are real: pattern recognition, technical ability, deep focus, lateral thinking. As anyone who is ND, or has kids with ND traits, knows, though — sometimes those great things don’t feel like fair recompense for living in a world that quite simply doesn’t fit right.

    Communication is a core issue. Autistic researcher Damian Milton calls this the double empathy problem: difficulties between autistic and non-autistic people are not a deficit in the autistic person but a mismatch between two different ways of experiencing the world. Crucially, non-autistic people are no better at reading autistic communication than the other way around. The difficulty runs in both directions. Only one side is ever asked to do the work.

    Enter the internet. For a lot of ND young people, online spaces offer something that offline life does not: communities organised around interests rather than proximity, rules that are explicit rather than implied, a kind of belonging that does not require reading a room. It’s parallel play writ large. Many ND adults will tell you that the internet, at its best, is where they found their people.

    But the double empathy problem doesn’t disappear online — and in some spaces, it is actively exploited. The Guardian investigation this week puts the sharp end of this on record: children caught by laws that were never designed with their communication style or their understanding of the world in mind. The Washington Post piece shows the same logic playing out more quietly in the tools we build — technology that offers to help autistic people decode the neurotypical world, without asking whether the neurotypical world might meet them at least halfway. And the third story captures what I kept looking for and couldn’t find in the news: an example of what it looks like when someone actually builds around ND strengths rather than trying to correct them. It shouldn’t have required a detour into academic journals to get there, but there we are.


    📰 Three things worth your attention

    1. Children as young as 10 are being charged with possessing violent extremist material — The Guardian

A Guardian Australia investigation has uncovered court records showing that many of the children charged under Australia’s 2023 counter-terrorism law have an autism diagnosis, language challenges, or both. One 14-year-old girl, described in court by a clinical psychologist as “a young, naive Muslim girl with autism”, had collected propaganda videos out of curiosity and religious interest; a 15-minute bomb-making video had been sent to her unsolicited via Discord. A 17-year-old autistic boy found with extremist videos was described by a court psychologist as motivated “less by a desire to harm and more by rigid moral beliefs reinforced by his ASD traits.” These are not cases where the law is catching dangerous children. They are cases where the law is catching vulnerable ones. One far-right group leader has explicitly discussed the appeal of recruiting autistic teenagers, seemingly without facing any comparable consequences.

    2. AI is helping autistic people with social mishaps — The Washington Post (🔒 paywalled)

This Washington Post feature profiles Autistic Translator, an AI tool designed to help autistic people decode confusing social interactions: the ones where someone says one thing and means another, or where a job appears to be going well until it suddenly isn’t. The tool fills a genuine gap. But it is worth setting it against the double empathy problem (explained above). I read the article with that in mind and came away glad that the individuals found relief, but also profoundly sad: technology designed to help autistic people decode neurotypical communication continues to accept the framing that the autistic person is the one who needs to change.

3. Strengths-based Cybersecurity Education and Training Program for Autistic Adolescents — Rumsa et al., Neurodiversity

    I went looking for a news story to end on something more hopeful. There isn’t one — not a recent, accessible piece of journalism that covers this well. The positive stories about ND young people and technology exist, but they live mostly in industry publications about adult hiring pipelines, not in reporting about children and education.

    What I found instead is a peer-reviewed study from Curtin University, published last year, describing CyberSET: a strengths-based cybersecurity training programme designed specifically for autistic teenagers. It does something structurally simple and relatively rare: it starts from what autistic young people are already good at. Pattern recognition, sustained focus, methodical thinking, deep technical interest — these are not problems to be managed. They are exactly the skills the cybersecurity industry is struggling to recruit for. Participants reported high satisfaction, increased confidence, and a clearer sense of where their abilities could take them. A massive win!


    🔁 ICYMI

    Ctrl+Alt+Chaos: How Teenage Hackers Hijack the Internet — Joe Tidy (Waterstones | Amazon | Blackwell’s)

Joe Tidy is the BBC’s first cyber correspondent. This book follows the rise and fall of teenage hacking gangs over the past decade, centring on the crimes of Julius Kivimaki, jailed in 2024 for stealing records from Finland’s largest psychotherapy provider and using them to blackmail some 33,000 people. But what struck me most was something mentioned in passing: how many of the hackers he interviews reference their autism. Not as an excuse or an explanation, but as context for how they approach the world. The ethical frame is not absent so much as differently structured. Hacking is a matter of technical skill. If your data is unsecured, that is your problem.

    That framing sits directly alongside everything else in this issue. Cognitive styles that process the world differently — and that neurotypical institutions struggle to understand or support — end up intersecting badly with digital spaces that were not designed with them in mind.


    🔬 What’s new with CAISE

    Ethics approval revisions came back this week. They were minor (hurrah!), and the updated application has been resubmitted.

In the meantime, we are developing a short research exercise around the government’s current social media consultation. Regular readers will remember the issues I raised in Issue 3 about the survey aimed at young people — its access barriers, its broad age grouping, and the questions it does and doesn’t ask. We want to look at this directly: how children and young people actually respond to the survey, what they find easy or hard, and whether it lets them say what they want to say.

    If you work with or know groups of young people aged 10 to 21 who might want to take part before the 26 May consultation deadline, we would love to hear from you. Get in touch.


    → What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #3

    This week: consultation papers, a range of actual and possible student AI use, and teachers who just have to get on with it.


    🔍 This week I’ve been thinking about…

    This week the UK’s social media consultation came out (finally)! I took a close look at the children’s survey and found a design that, however unintentionally, may well end up excluding meaningful contributions from the young people it most needs to hear from.

    Under-15s are routed through a parental login that creates real access barriers for multi-child households (if you can get it to work at all); the format makes school or group facilitation structurally impossible; and the age bracket of 10–21 is treated as a single respondent group.

There are further significant issues: no way to see the questions in advance so you can think about your answers before starting the survey, no save-progress or back buttons, and a range of, frankly, rather leading questions. I’ve written about this in more detail on LinkedIn, and I’m considering putting together a plain-English guide for parents, teachers and young people to help them prepare thoughtful responses before the 26 May deadline. Let me know if that would be useful.

Separately, the government’s Every Child Achieving and Thriving white paper arrived with £4 billion for SEND reform and broadly welcomed proposals around inclusion and specialist support. There was a chapter covering innovation and technology — quite a lot of technology, in fact. But the accompanying consultation document contains only a summary of that chapter, with no questions for comment. The direction of travel on SEND seems appropriate. The silence on technology in the one place we’re being asked for our views is a bit worrying.

    Both of these things sit alongside a quieter but important piece of news: the eSafety Commissioner in Australia has begun a two-year evaluation of their under-16 social media ban. Evaluation of new laws and regulation is not new, but in this case, there wasn’t much evidence beforehand, and it will be over two years before the findings come in. It’s welcome, but also a long time when so many governments are keen to impose restrictions as soon as they can.


    📰 Three things worth your attention

    1. An AI that goes to school for you — 404 Media

    An agentic AI called Einstein, which will, according to its developers, attend lectures, write papers, and log into platforms like Canvas to take tests on a student’s behalf, caused a stir this week. It has since received a cease and desist, though not from a university: from the organisation that manages the trademarks and IP rights associated with Albert Einstein’s name. The underlying story is worth considering carefully. Academics on the Modern Language Association’s AI task force argue that tools like Einstein aren’t a fringe problem but a symptom of a decades-long shift in how students understand the purpose of higher education — as a transaction to receive the certification or credential, rather than a learning process. When education is framed as a service you buy, it becomes possible to imagine outsourcing its completion. Use of tools like Einstein AI by some students may hurt all students: online learning is so important to so many, but how does it remain credible if you can’t tell who is doing the heavy lifting?

2. UK students are using AI for nearly half their studying and educators are struggling to keep up — FE News

Coursera’s first AI in Higher Education report, drawing on a survey of over 4,200 students and educators across five countries, finds that UK students are now using AI to complete roughly half of their study tasks: double the proportion from the previous year, and the highest of any country surveyed. Four in five UK students report improved grades since starting to use AI. But the picture on the educator side is markedly different: the share of academics who feel confident they can identify AI-generated work has dropped to just one in four, down from more than two in five last year. Only 30% of UK universities have a formal policy on AI use; yet this is the highest share of any country in the survey, which possibly says rather more about the global baseline than it does about UK readiness. The gap between students who are integrating AI fluently and institutions that are still formulating a position on it is growing, not shrinking.

3. “You just have to get on with it” — The Guardian

    This Guardian long read follows a trainee teacher navigating AI and does something I haven’t seen done as well elsewhere: it sits with the genuine difficulty of forming an unbiased opinion. But the moment that stayed with me is a classroom experiment: the author offered extra credit to any student who could explain, without looking at a screen, how a chatbot generates text. Nobody could. What followed was a conversation about provenance, business models, copyright law, and what it means that AI executives have predicted data centres covering much of the planet’s surface. Exactly the kind of lesson that will stick. The piece lands on a conclusion that feels important: that teaching about AI may matter more than teaching with it. Worth reading alongside the Coursera data above and the report below.


    🔁 ICYMI

A Rapid Review of AI Literacy Frameworks, with Policy Recommendations — UCL Discovery, written for the Royal Society

    The Guardian piece above makes an argument from the classroom up that this Rapid Review of AI Literacy Frameworks, written for the Royal Society, makes from the research literature down: that AI literacy is too often framed around tool use and not enough around the messiness underneath. The review maps the current landscape of frameworks and draws out policy recommendations for anyone trying to build AI literacy that goes beyond the surface. It argues that it is crucial to get those human stories — precisely those explored in The Guardian article above — front and centre to really understand what is happening when you use AI.

    Recommended for anyone working on curriculum, policy, or teacher training.


    🔬 What’s new with CAISE

    Last week I met with one of our partner schools. The agenda wasn’t about research design or interview questions. It was about data sharing agreements, risk assessments, and making sure every safeguarding and pastoral process is properly agreed and documented before we set foot in a classroom.

    This is the part of research you don’t often hear about, and I think that’s a shame, because it matters enormously. Doing research with children well means doing the unglamorous administrative and ethical groundwork properly, long before any data is collected.

    That said: we are keen to talk to other schools. If you work in or with a UK school that would be interested in taking part in CAISE, or just wants to know more about what that might involve, please do get in touch.


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #2

    This week: corporate accountability, AI in classrooms, and the questions that follow a tragedy.


    🔍 This week I’ve been thinking about…

    The story that’s stayed with me this week comes from Tumbler Ridge, British Columbia, where eight people, including students, were killed in a school shooting on February 10th. The suspected shooter had, months earlier, described violent scenarios to ChatGPT. OpenAI’s systems flagged those conversations automatically. Several employees raised concerns. And then the company decided the messages didn’t meet their threshold for alerting authorities, banned the account, and moved on.

    As someone who has done a lot of research into the importance of user privacy, I am alarmed at the thought of a world where AI companies routinely pass user conversations to law enforcement (although we know this is happening ever more in other datafied settings). These are genuinely hard calls, and false positives can have significant negative consequences.

To bring it back to children, though: we are deploying generative AI that young people will use — in schools, in their pockets, in expensive private institutions — and leaving each institution to determine its own answers to difficult safeguarding questions. Who decides when a young person’s AI interaction warrants intervention? Who sets that threshold, and on what basis? And who is accountable when it goes wrong?

    In child safeguarding in schools, these questions have established (if complex) answers. Anyone involved in a school setting has a legal duty of care to raise safeguarding concerns. There are designated safeguarding leads. There are multi-agency frameworks. None of that infrastructure has followed AI into children’s lives. We are, in effect, treating children as early adopters of a system that hasn’t yet worked out its responsibilities to them.

Issue #1 made the case that excluding children from technology isn’t the answer. But inclusion without protection isn’t the answer either. The work needs to go into building the frameworks, and into genuinely understanding how children navigate these systems, so that both become possible at once.


    📰 Three things worth your attention

1. Meta knew parental supervision wasn’t working — and didn’t tell anyone — TechCrunch

    An internal Meta study called “Project MYST,” produced with the University of Chicago, surveyed 1,000 teens and their parents and found that household rules and parental controls had little measurable association with whether teens used social media compulsively. It also found that teens facing the most difficult life circumstances — bullying, difficult home situations — were least able to moderate their use. This didn’t come to light through publication (although it tracks with a lot of academic research): it emerged through testimony in an ongoing social media addiction lawsuit in Los Angeles. Instagram head Adam Mosseri said he couldn’t recall the study, despite a document suggesting he’d approved it. The finding matters because parental controls are consistently what legislators reach for when they want to look like they’re acting. This study suggests they’re not the lever they’re presented as, which should surprise no-one who has talked to children or parents about them.

2. “Students are being treated like guinea pigs” — 404 Media

    A leaked investigation into Alpha School, a US private school that charges up to $65,000 a year and promises students can complete their core curriculum in two hours a day, thanks to AI, reveals a significant gap between the marketing and the reality. Internal documents show the school’s AI-generated reading comprehension questions were assessed by employees as doing “more harm than good”: questions with illogical answer choices, questions answerable without reading the article, and a hallucination rate high enough that the AI was deemed unsuitable for one key task. The company relies on AI to test the quality of its AI-generated lessons, which, predictably, doesn’t catch the problems. Meanwhile, student surveillance is pervasive: an app called StudyReel monitors screen activity, webcam footage, mouse movements and app usage, and hours of video recordings of students’ faces are stored in a Google Drive accessible to anyone with the link. One employee noted internally that they needed to get better at “selling” the surveillance to parents. Former employees credit students’ strong test scores not to the AI, but to the human tutors who cared about them.

3. ChatGPT flagged the Tumbler Ridge shooter — and didn’t alert police — The Verge

    Months before the February 10th attack, the suspected perpetrator’s conversations with ChatGPT, describing gun violence, triggered OpenAI’s automated review systems. Employees debated escalation. Leadership concluded the posts didn’t meet the bar for an imminent, credible threat, banned the account, and took no further action. OpenAI has since said it proactively contacted the RCMP (the Canadian police force) after the shooting, though the provincial government noted that OpenAI met with a provincial representative the day after the attack without disclosing it held potentially relevant evidence. Canada’s federal AI minister has raised concerns about OpenAI’s safety protocols. There are no easy answers here, and I think that’s the point: this is hard, it requires policy, and no one has built that policy yet.


    🔁 ICYMI

UNICEF’s Tinkering with Tech — UNICEF Digital Education

    Against the week’s heavier stories, this is worth sitting with. UNICEF’s Tinkering with Tech initiative uses micro:bit devices (or similar) and design thinking to bring hands-on AI and STEM learning to children, with a deliberate focus on girls, and on building skills rather than consuming technology. From mid-2025, AI literacy was added to the programme, which is now expanding from its initial four-country pilot to Lao PDR, Ukraine, and Uzbekistan. The approach is meaningfully different from “here is an AI tool, use it”: students identify a real-world problem their community faces, build something, reflect. Worth bookmarking for anyone thinking about what AI education that centres children, rather than the technology, can look like.
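    To make “build something” concrete, here is the kind of tiny programme a beginner might write on a micro:bit for a simple community data-gathering project. This is a minimal MicroPython sketch of my own, purely illustrative and not taken from the UNICEF materials: button A tallies whatever the student has chosen to count (litter, birds, cars passing the school), and button B scrolls the running total.

    ```python
    # Illustrative micro:bit MicroPython sketch (my own example, not from the
    # UNICEF Tinkering with Tech materials): a hand-held tally counter for a
    # student's community survey.
    from microbit import *

    count = 0

    while True:
        if button_a.was_pressed():
            count += 1                  # one more item spotted
            display.show(Image.YES)     # quick visual confirmation
            sleep(300)
            display.clear()
        if button_b.was_pressed():
            display.scroll(str(count))  # show the running total
    ```

    The value of an exercise like this is less the code than the conversation around it: what the students chose to count, why it matters to their community, and what they would change after using it.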


    🔬 What’s new with CAISE

    Last Tuesday I was back at the University of Kent’s Institute of Cyber Security for Society (iCSS), where I did my PhD, talking about children’s use of generative AI. My talk, “Canaries in the AI Coal Mine”, drew on Project CAISE to argue that we need to shift the conversation from what we’re keeping children away from, to what we’re equipping them to deal with.

    The numbers keep moving: over 75% of UK 13-18 year olds are already using generative AI, and in one recent survey, over half had confided something serious to an AI companion. They aren’t waiting for permission. The adults are still arguing about whether to let them in while they’re already through the door.

    I got some excellent tricky questions from the audience, including what happens if your research participants suddenly get banned from a large number of the platforms you’re studying…I guess we will find out if and when it happens!


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone thinking about these questions.

  • CAISE Notes – Issue #1

    Welcome to the very first CAISE Notes. Every week: what’s worth knowing about young people and AI, from the research, the policy, and the field.

    This week: policy theatre, student agency, and an entry-point for getting hands-on with AI.


    🔍 This week I’ve been thinking about…

    Monday brought another UK government announcement about consulting on banning social media for under-16s — the second in as many months. I’ll confess I’m itching to respond formally, but there’s no formal consultation to respond to. I think next month?

Until then: the arguments against bans aren’t new. Young people face two distinct types of harm online: platform design problems (algorithmic amplification, infinite scroll), and criminal behaviour (deepfakes, harassment, fraud) that AI has made devastatingly easier to commit. Neither of these harms is age-specific — both affect everyone and require solutions that work for everyone.

    Bans create a third harm that is unique to children: by choosing exclusion over education, we deny young people the literacy that comes from a mixture of education and lived experience. This generation could have been the first to navigate these risks well from childhood. Instead, it looks like we’re ensuring they won’t.

What’s striking about this political moment is the opportunity cost. The political capital being spent on age restrictions could instead go towards demanding that platforms fix their design for everyone, and towards building genuine digital literacy. That’s harder. But it’s the work that would actually help.

    This is why Project CAISE feels urgent: understanding how young people actually navigate these technologies seems foundational to any policy that might help rather than harm them.


    📰 Three things worth your attention

1. The UK Government’s proposals, explained — The Guardian

All the coverage summarises a Substack post from Keir Starmer, so here’s what you need to know: the UK government is looking at the social media ban (which would be fast-tracked into law by some legislative sleight of hand); extending online safety rules to AI chatbots (good); and forcing social media companies to provide children’s data after their death to coroners or Ofcom (also good). Worth reading alongside the House of Commons Library briefing for the evidence landscape.

2. What 200 students actually said when asked about AI policy — Honolulu Civil Beat

    A high school sophomore writes about a Stanford-facilitated deliberation event where 200 students across 19 states worked through the future of AI in schools. Their conclusion was neither ban it nor embrace it uncritically — it was understand it. This is what genuine student voice looks like, and it’s a useful counterpoint to policies developed without it. (And gives me hope for excellent outcomes for CAISE!)

3. Adventures in vibe coding — Naomi Alderman, Whatever Works

    Novelist Naomi Alderman spent a weekend building her own personal software tools using AI, with no prior coding experience. Her reflection on what it felt like (“a feeling of mastery and agency”) is a useful provocation: if we want young people to be critical, confident navigators of AI, the adults around them need to get their hands dirty too. (Paywalled, but the free preview makes the key point.)


    🔁 ICYMI

“What I wish my parents or carers knew…” — Children’s Commissioner for England, December 2025

    A practical guide for parents on navigating children’s digital lives — but notice whose voice frames it. The title is drawn from what children said they wished adults understood. There’s also a companion activity pack designed to be used directly with children, which is worth bookmarking for anyone working in schools, or any parent. In a week where policy is being made about young people rather than with them, this is a wonderfully useful document that I will refer back to repeatedly, I think.


    🔬 What’s new with CAISE

    The ethics proposal is in! For those of you who aren’t researchers, this is the major hurdle we need to clear before we can get on and research. It’s not a small piece of work — to do it right, you need to know exactly what you’re going to do, how, and what the risks are. Hopefully, we’ll have some good news very soon.


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who’s thinking about these questions.