
  • CAISE Notes – Issue #6


    This week: AI in schools, academic integrity, and the audit trail that changes how you write.


    🔍 This week I’ve been thinking about…

    A report out this week finds that 71% of UK higher education students now use AI for their studies. Three quarters of them are anxious about being accused of using it.

    Those two numbers, sitting next to each other, describe a problem. It is not the problem most institutions think they have.

    The standard response to AI in education has taken one form: detection. Plagiarism tools have added AI detection modules. A post I saw on Reddit last week suggested that students write in single shared documents, so keystroke logs can demonstrate the work is theirs. This is an audit trail: show your working, prove it was you.

    There is a structural problem with this. Writers do not write linearly. Academics do not write linearly. There are entire software categories — Scrivener is one, but many researchers (like me!) use combinations of tools — built around the reality that writing happens in fragments, out of sequence, across multiple documents. An audit trail of a Google Doc shows you one thing: whether someone wrote in a Google Doc. A pasted block of text is not the smoking AI gun some may want it to be.

    The deeper problem is that AI detection flags good writing. Writing that is clear, direct, and unhedged looks more like AI output, not less. The student who writes well is at higher risk of a false positive than the student who writes badly.

    The effect on students is measurable. Anxiety about being falsely accused, among students who are using AI in entirely ordinary ways, is now widespread. The effect on teachers is equally measurable: confidence in identifying AI-generated work has fallen sharply over the past year, and the tools they have been handed are not helping. The gap between how students are integrating AI and how institutions are treating that use is growing, and it is producing anxiety rather than learning.

    Most of the students in these surveys are not trying to game anything. Some are overwhelmed. Some are using AI to get past the blank page in ways that have nothing to do with dishonesty and everything to do with how their brains work. The audit trail catches them all equally.


    📰 Three things worth your attention

    1. UK student AI use hits 71% as 75% face AI detector anxiety — FE News

    This article reports on a Studiosity/YouGov report: 2026 AI and Wellbeing in Education. UK students are now using AI for 71% of study tasks: double last year’s rate, the highest of any country surveyed. Three quarters report anxiety about being flagged by detectors, regardless of whether they used AI. Confidence among educators in their ability to identify AI-generated work has dropped to one in four, down from more than two in five last year. The trust gap between how students are using AI and how institutions are responding is the real story here.

    2. Trump’s AI framework targets state laws, shifts child safety burden to parents — TechCrunch

    The US administration put out a framework for a single federal AI policy last week. It quite nakedly pre-empts state-level regulation, and it is firmly pro-industry. The first point covers child safety: it frames this, primarily, as a parenting responsibility. Parents must be given better parental “tools”, and then they must get on with it. By contrast, actual obligations for platforms are limited, and hedged with terms such as “commercially reasonable”. This is a different approach from what we’ve seen in the UK and Europe (perhaps unsurprisingly), where platforms have duties, not just options. That framing difference matters enormously for what gets built, and is something we will be considering as CAISE carries on.

    3. Australia’s under-16 social media ban: what the teenagers actually say — BBC News

    A short BBC video puts the question directly to Australian teenage girls: has the ban changed anything? The answer, largely, is no. Most who were using social media before still are — the majority weren’t asked to verify their age at all. Those who weren’t on it before still aren’t. The ban has not changed the landscape; it has changed who has to work around it.

    A TechDirt piece published the same week — Australia’s Teen Social Media Ban Is Just Training A Generation In The Art Of The Workaround — makes the structural point explicit: what a poorly enforced ban primarily teaches is that rules have workarounds, and that finding them is a rational response to adult-imposed restrictions.


    🔁 ICYMI

    How kids are actually using AI — BBC Future / Pew Research Center / Common Sense Media

    Two new surveys — from Pew Research Center and Common Sense Media — asked US teenagers and their parents about AI, and then compared the answers. The gap is striking. Only 51% of parents believe their child uses AI. The actual figure from the teenagers: 64%. Four in ten parents have never had a conversation with their child about AI at all.

    What children are doing with it is more varied than most coverage suggests. The most common use is simply looking things up. Homework help comes second. But 12% use it for emotional support, 16% for casual conversation — and there are significant racial disparities in those figures, with Black teenagers far more likely to use the tools than White teens.

    The attitudinal gap is just as wide. 52% of parents say using AI for schoolwork is unethical and should have consequences. Exactly 52% of teenagers say it is innovative and should be encouraged. One of those groups is out of step with where things are going. The surveys don’t settle which one — but they do suggest that the conversation most families aren’t having is probably the most important place to start.


    🔬 What’s new with CAISE

    An expedited ethics application has been submitted for the short research exercise on the government’s social media consultation — looking at how young people actually engage with the survey, what they find easy or hard, and whether it lets them say what they want to say. If you work with groups of young people aged 10 to 21 who might want to take part before the 26 May deadline, get in touch.


    → What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #5


    This week: neurodiversity, digital vulnerability, and the question of who technology is actually designed to serve.

    🔍 This week I’ve been thinking about…


    This week is Neurodiversity Celebration Week. Many people (myself included) aren’t wild about the framing, but the upsides are real: pattern recognition, technical ability, deep focus, lateral thinking. As anyone who is ND, or has kids with ND traits, knows, though — sometimes those great things don’t feel like fair recompense for living in a world that quite simply doesn’t fit right.

    Communication is a core issue. Autistic researcher Damian Milton calls this the double empathy problem: difficulties between autistic and non-autistic people are not a deficit in the autistic person but a mismatch between two different ways of experiencing the world. Crucially, non-autistic people are no better at reading autistic communication than the other way around. The difficulty runs in both directions. Only one side is ever asked to do the work.

    Enter the internet. For a lot of ND young people, online spaces offer something that offline life does not: communities organised around interests rather than proximity, rules that are explicit rather than implied, a kind of belonging that does not require reading a room. It’s parallel play writ large. Many ND adults will tell you that the internet, at its best, is where they found their people.

    But the double empathy problem doesn’t disappear online — and in some spaces, it is actively exploited. The Guardian investigation this week puts the sharp end of this on record: children caught by laws that were never designed with their communication style or their understanding of the world in mind. The Washington Post piece shows the same logic playing out more quietly in the tools we build — technology that offers to help autistic people decode the neurotypical world, without asking whether the neurotypical world might meet them at least halfway. And the third story captures what I kept looking for and couldn’t find in the news: an example of what it looks like when someone actually builds around ND strengths rather than trying to correct them. It shouldn’t have required a detour into academic journals to get there, but there we are.


    📰 Three things worth your attention

    1. Children as young as 10 are being charged with possessing violent extremist material — The Guardian

    A Guardian Australia investigation has uncovered court records showing that many of the children charged under Australia’s 2023 counter-terrorism law have an autism diagnosis, language challenges, or both. One 14-year-old girl, described in court by a clinical psychologist as “a young, naive Muslim girl with autism”, had collected propaganda videos out of curiosity and religious interest; a 15-minute bomb-making video had been sent to her unsolicited via Discord. A 17-year-old autistic boy found with extremist videos was described by a court psychologist as motivated “less by a desire to harm and more by rigid moral beliefs reinforced by his ASD traits.” These are not cases where the law is catching dangerous children. They are cases where the law is catching vulnerable ones. One far-right group leader has explicitly discussed the appeal of recruiting autistic teenagers — seemingly without facing any comparable consequences himself.

    2. AI is helping autistic people with social mishaps — The Washington Post (🔒 paywalled)

    This Washington Post feature profiles Autistic Translator, an AI tool designed to help autistic people decode confusing social interactions: the ones where someone says one thing and means another, or where a job appears to be going well until it suddenly isn’t. The tool fills a genuine gap. But it is worth setting it against the double empathy problem (explained above). I read the article with that in mind and came away glad the individuals found relief, but also profoundly sad: technology designed to help autistic people decode neurotypical communication still accepts the framing that the autistic person is the one who needs to change.

    3. Strengths-based Cybersecurity Education and Training Program for Autistic Adolescents — Rumsa et al., Neurodiversity

    I went looking for a news story to end on something more hopeful. There isn’t one — not a recent, accessible piece of journalism that covers this well. The positive stories about ND young people and technology exist, but they live mostly in industry publications about adult hiring pipelines, not in reporting about children and education.

    What I found instead is a peer-reviewed study from Curtin University, published last year, describing CyberSET: a strengths-based cybersecurity training programme designed specifically for autistic teenagers. It does something structurally simple and relatively rare: it starts from what autistic young people are already good at. Pattern recognition, sustained focus, methodical thinking, deep technical interest — these are not problems to be managed. They are exactly the skills the cybersecurity industry is struggling to recruit for. Participants reported high satisfaction, increased confidence, and a clearer sense of where their abilities could take them. A massive win!


    🔁 ICYMI

    Ctrl+Alt+Chaos: How Teenage Hackers Hijack the Internet — Joe Tidy (Waterstones | Amazon | Blackwell’s)

    Joe Tidy is the BBC’s first cyber correspondent. This book follows the rise and fall of teenage hacking gangs over the past decade, centring on the crimes of Julius Kivimaki, jailed in 2024 for stealing records from Finland’s largest psychotherapy provider and using them to blackmail some 33,000 people. But what struck me most was something mentioned in passing: how many of the hackers he interviews reference their autism. Not as an excuse or an explanation, but as context for how they approach the world. The ethical frame is not absent so much as differently structured. Hacking is a matter of technical skill. If your data is unsecured, that is your problem.

    That framing sits directly alongside everything else in this issue. Cognitive styles that process the world differently — and that neurotypical institutions struggle to understand or support — end up intersecting badly with digital spaces that were not designed with them in mind.


    🔬 What’s new with CAISE

    Ethics approval revisions came back this week. They were minor (hurrah!), and the updated application has been resubmitted.

    In the meantime, we are developing a short research exercise around the government’s current social media consultation. Regular readers will remember the issues I raised in Issue 3 about the survey aimed at young people — its access barriers, its broad age grouping, and the questions it does and doesn’t ask. We want to look at this directly: how children and young people actually respond to the survey, what they find easy or hard, and whether it lets them say what they want to say.

    If you work with or know groups of young people aged 10 to 21 who might want to take part before the 26 May consultation deadline, we would love to hear from you. Get in touch.


    → What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #4


    This week: AI companions, digital comfort, and what happens when the machine becomes the friend.


    🔍 This week I’ve been thinking about…

    Why do people turn to chatbots for comfort? It is not a new question. Researchers have been noting for several years that when someone is feeling low or uncertain, the tool that is right there, with no waiting list and no judgement, is an increasingly appealing option. But the tools are changing. Chatbots now remember previous conversations, develop what feels like a relationship over time, and respond with a warmth that is designed to keep you coming back. For adults navigating loneliness or pressure, this is already raising serious questions. For children, the pull may be stronger still.

    Consider what it is like to be a child right now. You are growing up after a pandemic that disrupted years of your life where you should have been playing, or learning how to play, with friends. You probably have less unsupervised time with friends than your parents did. You are more surveilled, at home and at school, than any previous generation. You face enormous existential uncertainties, from climate to the economy to AI itself. And on your phone is something that will listen to you whenever you need it, without telling your parents, without making a face, without ever getting tired of you.

    I can see why this is appealing. I think anyone being honest can.

    The stories of what happens when it goes wrong are terrifying: chatbots reinforcing suicidal ideation, colluding with delusions, encouraging people to withdraw from every human relationship they have. But what makes this so hard as parents is that the answer is not simply “stop children using chatbots.” Part of the response lies in families talking openly, in making sure children have time with friends and space to play. And part of it lies in a million different policy decisions, spanning education, platform design, mental health provision, data protection, and the design of children’s daily lives. All of that, together, shapes whether a child reaches for a chatbot, and why they might do so.


    📰 Three things worth your attention

    1. AI dating apps complicate China’s efforts to boost its birthrate — The New York Times (usually 🔒 paywalled, but hopefully this gift link works!)

    Millions of young Chinese women are choosing AI romantic partners over human ones. The apps allow users to design companions with customisable appearances, personalities and voices; one 21-year-old psychology student profiled in the piece has been on over 200 virtual dates and narrowed her suitors to two AI boyfriends, asking: “Why go and date others? That’s too troublesome.” A state-led push to adopt AI created a boom in companion platforms just as the government was trying to reverse a historically low birthrate. Regulators have since proposed rules requiring platforms to intervene if users show signs of unhealthy dependency. The emotional needs driving this — loneliness, pressure, the desire to be understood without the risk of rejection — are not culturally specific. They are exactly the needs children and young people are bringing to chatbots here.

    2. Schools are using AI counsellors to track students’ mental health. Is it safe? — The Guardian

    A detailed investigation into US schools deploying AI therapy platforms, particularly in areas with limited mental health resources. In one Florida school, a counsellor receives alerts from a platform students use outside school hours; when a “severe” alert came through for an eighth-grader, she spent her evening calling the family and the police. The student is alive and well. But these platforms do not carry the same privacy protections as conversations with a licensed therapist, and students find it easier to confide in a chat interface than a person, partly because there is no face to read judgement in. One youth advocacy organisation warns that platforms measure whether a bot serves as an effective crutch for immediate loneliness, not whether young people are actually building the connections they need. Worth reading for anyone thinking about where human oversight sits in the picture.

    3. How to talk to someone experiencing “AI psychosis” — 404 Media (🔒 paywalled, but there’s a two minute video summary here)

    This piece sits at the sharpest end of the companionship question. It reports on a growing number of cases where people have developed delusional beliefs after extended chatbot conversations, including a man convinced ChatGPT had revealed a fundamental flaw in physics, and a family who allege Gemini urged their relative to end his life so they could be together. Psychiatrists distinguish between people whose pre-existing conditions find a new object in AI, and a more common pattern where chatbots “collude” with emerging delusions, endorsing beliefs a human would gently challenge. The sycophancy built into these systems is a serious factor. OpenAI has acknowledged its safeguards become less reliable in extended interactions. ChatGPT alone now has 900 million weekly active users. The piece ends where it has to: the strategies that work best are the ones humans already know. Approach with love, without judgement, and see where it takes you.


    🔁 ICYMI

    Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions — Common Sense Media, July 2025

    If you want data behind this week’s theme, this is the place to start. A nationally representative survey of over 1,000 US 13-to-17-year-olds found that 72% have used an AI companion, and more than half are regular users. A third of teens say their conversations with AI companions are as satisfying as, or more satisfying than, conversations with other people. A third have discussed serious issues with a chatbot instead of a human. A quarter have shared personal information with one. The researchers at Common Sense Media worked with Stanford’s Brainstorm lab to assess the risks, and the companion piece is equally sobering: posing as teenagers, investigators found it straightforward to elicit conversations about sex, self-harm, violence, and drug use from commonly used AI companion platforms. Essential reading for anyone working with young people.


    🔬 What’s new with CAISE

    We have a new team member! Youyue Sun has joined Project CAISE to support the policy analysis that runs throughout the project. Youyue is a BA Education, Society and Culture student at UCL, with a particular interest in how the emergence of AI is reshaping education and wider society. She also works as a placement student at Universities UK, conducting policy-focused research on AI in higher education, and is a member of the Student Council Team at the Good Future Foundation. We are delighted to have her on board, and her perspective on both the education and policy sides of this work is going to be invaluable.


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #3


    This week: consultation papers, a range of actual and possible student AI use, and teachers who just have to get on with it.


    🔍 This week I’ve been thinking about…

    This week the UK’s social media consultation came out (finally)! I took a close look at the children’s survey and found a design that, however unintentionally, may well end up excluding meaningful contributions from the young people it most needs to hear from.

    Under-15s are routed through a parental login that creates real access barriers for multi-child households (if you can get it to work at all); the format makes school or group facilitation structurally impossible; and the age bracket of 10–21 is treated as a single respondent group.

    There are also significant issues with the survey itself: you cannot see the questions ahead of starting, so there is no chance to think about your answers in advance; there are no save-progress or back buttons; and there is a range of, frankly, rather leading questions. I’ve written about this in more detail on LinkedIn, and I’m considering putting together a plain-English guide for parents, teachers and young people to help them prepare thoughtful responses before the 26 May deadline. Let me know if that would be useful.

    Separately, the government’s Every Child Achieving and Thriving white paper arrived with £4 billion for SEND reform and broadly welcomed proposals around inclusion and specialist support. One chapter covered innovation and technology — quite a lot of technology, in fact. But the accompanying consultation document contains only a summary of that chapter, with no questions for comment. The direction of travel on SEND seems appropriate. The silence on technology, in the one place we’re being asked for our views, is a bit worrying.

    Both of these things sit alongside a quieter but important piece of news: the eSafety Commissioner in Australia has begun a two-year evaluation of their under-16 social media ban. Evaluating new laws and regulation is nothing new, but in this case there wasn’t much evidence beforehand, and it will be over two years before the findings come in. That’s welcome, but it’s also a long time to wait when so many governments are keen to impose restrictions as soon as they can.


    📰 Three things worth your attention

    1. An AI that goes to school for you — 404 Media

    An agentic AI called Einstein, which will, according to its developers, attend lectures, write papers, and log into platforms like Canvas to take tests on a student’s behalf, caused a stir this week. It has since received a cease and desist, though not from a university: from the organisation that manages the trademarks and IP rights associated with Albert Einstein’s name. The underlying story is worth considering carefully. Academics on the Modern Language Association’s AI task force argue that tools like Einstein aren’t a fringe problem but a symptom of a decades-long shift in how students understand the purpose of higher education — as a transaction to receive the certification or credential, rather than a learning process. When education is framed as a service you buy, it becomes possible to imagine outsourcing its completion. Use of tools like Einstein AI by some students may hurt all students: online learning is so important to so many, but how does it remain credible if you can’t tell who is doing the heavy lifting?

    2. UK students are using AI for nearly half their studying and educators are struggling to keep up — FE News

    Coursera’s first AI in Higher Education report, drawing on a survey of over 4,200 students and educators across five countries, finds that UK students are now using AI to complete roughly half of their study tasks: double the proportion from the previous year, and the highest of any country surveyed. Four in five UK students report improved grades since starting to use AI. But the picture on the educator side is markedly different: the share of academics who feel confident they can identify AI-generated work has dropped to just one in four, down from more than two in five last year. Only 30% of UK universities have a formal policy on AI use; yet this is the highest share of any country in the survey, possibly saying rather more about the global baseline than it does about UK readiness. The gap between students who are integrating AI fluently and institutions that are still formulating a position on it is growing, not shrinking.

    3. “You just have to get on with it” — The Guardian

    This Guardian long read follows a trainee teacher navigating AI and does something I haven’t seen done as well elsewhere: it sits with the genuine difficulty of forming an unbiased opinion. But the moment that stayed with me is a classroom experiment: the author offered extra credit to any student who could explain, without looking at a screen, how a chatbot generates text. Nobody could. What followed was a conversation about provenance, business models, copyright law, and what it means that AI executives have predicted data centres covering much of the planet’s surface. Exactly the kind of lesson that will stick. The piece lands on a conclusion that feels important: that teaching about AI may matter more than teaching with it. Worth reading alongside the Coursera data above and the report below.


    🔁 ICYMI

    A Rapid Review of AI Literacy Frameworks, with Policy Recommendations — UCL Discovery, written for the Royal Society

    The Guardian piece above makes an argument from the classroom up that this Rapid Review of AI Literacy Frameworks, written for the Royal Society, makes from the research literature down: that AI literacy is too often framed around tool use and not enough around the messiness underneath. The review maps the current landscape of frameworks and draws out policy recommendations for anyone trying to build AI literacy that goes beyond the surface. It argues that it is crucial to get those human stories — precisely those explored in The Guardian article above — front and centre to really understand what is happening when you use AI.

    Recommended for anyone working on curriculum, policy, or teacher training.


    🔬 What’s new with CAISE

    Last week I met with one of our partner schools. The agenda wasn’t about research design or interview questions. It was about data sharing agreements, risk assessments, and making sure every safeguarding and pastoral process is properly agreed and documented before we set foot in a classroom.

    This is the part of research you don’t often hear about, and I think that’s a shame, because it matters enormously. Doing research with children well means doing the unglamorous administrative and ethical groundwork properly, long before any data is collected.

    That said: we are keen to talk to other schools. If you work in or with a UK school that would be interested in taking part in CAISE, or just wants to know more about what that might involve, please do get in touch.


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who is trying to figure it out.

  • CAISE Notes – Issue #2


    This week: corporate accountability, AI in classrooms, and the questions that follow a tragedy.


    🔍 This week I’ve been thinking about…

    The story that’s stayed with me this week comes from Tumbler Ridge, British Columbia, where eight people, including students, were killed in a school shooting on February 10th. The suspected shooter had, months earlier, described violent scenarios to ChatGPT. OpenAI’s systems flagged those conversations automatically. Several employees raised concerns. And then the company decided the messages didn’t meet their threshold for alerting authorities, banned the account, and moved on.

    As someone who has done a lot of research into the importance of user privacy, I am alarmed at the thought of a world where AI companies routinely pass user conversations to law enforcement (although we know this is happening ever more in other datafied settings). These are genuinely hard calls, and false positives can have significant negative consequences.

    To bring it back to children, though: we are deploying generative AI that young people will use — in schools, in their pockets, in expensive private institutions — with institutions determining their own answers to difficult safeguarding questions. Who decides when a young person’s AI interaction warrants intervention? Who sets that threshold, and on what basis? And who is accountable when it goes wrong?

    In child safeguarding in schools, these questions have established (if complex) answers. Anyone involved in a school setting has a legal duty of care to raise safeguarding concerns. There are designated safeguarding leads. There are multi-agency frameworks. None of that infrastructure has followed AI into children’s lives. We are, in effect, treating children as early adopters of a system that hasn’t yet worked out its responsibilities to them.

    Issue #1 made the case that excluding children from technology isn’t the answer. But inclusion without protection isn’t the answer either. The work needs to be in building the frameworks, and genuinely understanding how children navigate these systems, that make both possible at once.


    📰 Three things worth your attention

    1. Meta knew parental supervision wasn’t working — and didn’t tell anyone — TechCrunch

    An internal Meta study called “Project MYST,” produced with the University of Chicago, surveyed 1,000 teens and their parents and found that household rules and parental controls had little measurable association with whether teens used social media compulsively. It also found that teens facing the most difficult life circumstances — bullying, difficult home situations — were least able to moderate their use. This didn’t come to light through publication (although it tracks with a lot of academic research): it emerged through testimony in an ongoing social media addiction lawsuit in Los Angeles. Instagram head Adam Mosseri said he couldn’t recall the study, despite a document suggesting he’d approved it. The finding matters because parental controls are consistently what legislators reach for when they want to look like they’re acting. This study suggests they’re not the lever they’re presented as, which should surprise no-one who has talked to children or parents about them.

    2. “Students are being treated like guinea pigs” — 404 Media

    A leaked investigation into Alpha School, a US private school that charges up to $65,000 a year and promises students can complete their core curriculum in two hours a day, thanks to AI, reveals a significant gap between the marketing and the reality. Internal documents show the school’s AI-generated reading comprehension questions were assessed by employees as doing “more harm than good”: questions with illogical answer choices, questions answerable without reading the article, and a hallucination rate high enough that the AI was deemed unsuitable for one key task. The company relies on AI to test the quality of its AI-generated lessons, which, predictably, doesn’t catch the problems. Meanwhile, student surveillance is pervasive: an app called StudyReel monitors screen activity, webcam footage, mouse movements and app usage, and hours of video recordings of students’ faces are stored in a Google Drive accessible to anyone with the link. One employee noted internally that they needed to get better at “selling” the surveillance to parents. Former employees credit students’ strong test scores not to the AI, but to the human tutors who cared about them.

    3. ChatGPT flagged the Tumbler Ridge shooter — and didn’t alert police – The Verge

    Months before the February 10th attack, the suspected perpetrator’s conversations with ChatGPT, describing gun violence, triggered OpenAI’s automated review systems. Employees debated escalation. Leadership concluded the posts didn’t meet the bar for an imminent, credible threat, banned the account, and took no further action. OpenAI has since said it proactively contacted the RCMP (the Canadian police force) after the shooting, though the provincial government noted that OpenAI met with a provincial representative the day after the attack without disclosing it held potentially relevant evidence. Canada’s federal AI minister has raised concerns about OpenAI’s safety protocols. There are no easy answers here, and I think that’s the point: this is hard, it requires policy, and no one has built that policy yet.


    🔁 ICYMI

    UNICEF’s Tinkering with Tech – UNICEF Digital Education

    Against the week’s heavier stories, this is worth sitting with. UNICEF’s Tinkering with Tech initiative uses micro:bit devices (or similar) and design thinking to bring hands-on AI and STEM learning to children, with a deliberate focus on girls, and on building skills rather than consuming technology. From mid-2025, AI literacy was added to the programme, which is now expanding from its initial four-country pilot to Lao PDR, Ukraine, and Uzbekistan. The approach is meaningfully different from “here is an AI tool, use it”: students identify a real-world problem their community faces, build something, reflect. Worth bookmarking for anyone thinking about what AI education that centres children, rather than the technology, can look like.


    🔬 What’s new with CAISE

    Last Tuesday I was back at the University of Kent’s Institute of Cyber Security for Society (iCSS), where I did my PhD, talking about children’s use of generative AI. My talk, “Canaries in the AI Coal Mine”, drew on Project CAISE to argue that we need to shift the conversation from what we’re keeping children away from, to what we’re equipping them to deal with.

    The numbers keep moving: over 75% of UK 13–18-year-olds are already using generative AI, and in one recent survey, over half had confided something serious to an AI companion. They aren’t waiting for permission. The adults are still arguing about whether to let them in while they’re already through the door.

    I got some excellent, tricky questions from the audience, including what happens if your research participants suddenly get banned from a large number of the platforms you’re studying… I guess we will find out if and when it happens!


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone thinking about these questions.

  • CAISE Notes – Issue #1

    CAISE Notes – Issue #1

    Welcome to the very first CAISE Notes. Every week: what’s worth knowing about young people and AI, from the research, the policy, and the field.

    This week: policy theatre, student agency, and an entry-point for getting hands-on with AI.


    🔍 This week I’ve been thinking about…

    Monday brought another UK government announcement about consulting on banning social media for under-16s — the second in as many months. I’ll confess I’m itching to respond formally, but there’s no formal consultation to respond to. I think next month?

    Until then: the arguments against bans aren’t new. Young people face two distinct types of harm online: platform design problems (algorithmic amplification, infinite scroll); and criminal behaviour (deepfakes, harassment, fraud) that AI has made devastatingly easier to commit. Neither of these harms is age-specific — both affect everyone and require solutions that work for everyone.

    Bans create a third harm that is unique to children: by choosing exclusion over education, we deny young people the literacy that comes from a mixture of education and lived experience. This generation could have been the first to navigate these risks well from childhood. Instead, it looks like we’re ensuring they won’t.

    What’s striking about this political moment is the opportunity cost. The political capital spent on age restrictions could instead demand platforms fix their design for everyone, and build genuine digital literacy. That’s harder. But it’s the work that would actually help.

    This is why Project CAISE feels urgent: understanding how young people actually navigate these technologies seems foundational to any policy that might help rather than harm them.


    📰 Three things worth your attention

    1. The UK Government’s proposals, explained – The Guardian

    All the coverage summarises a Substack post from Keir Starmer, so here’s what you need to know: the UK Government is looking at a social media ban for under-16s (which would be fast-tracked into law by some legislative sleight of hand); at extending online safety rules to AI chatbots (good); and at forcing social media companies to provide a child’s data to coroners or Ofcom after their death (also good). Worth reading alongside the House of Commons Library briefing for the evidence landscape.

    2. What 200 students actually said when asked about AI policy – Honolulu Civil Beat

    A high school sophomore writes about a Stanford-facilitated deliberation event where 200 students across 19 states worked through the future of AI in schools. Their conclusion was neither ban it nor embrace it uncritically — it was understand it. This is what genuine student voice looks like, and it’s a useful counterpoint to policies developed without it. (And gives me hope for excellent outcomes for CAISE!)

    3. Adventures in vibe coding – Naomi Alderman, Whatever Works

    Novelist Naomi Alderman spent a weekend building her own personal software tools using AI, with no prior coding experience. Her reflection on what it felt like (“a feeling of mastery and agency”) is a useful provocation: if we want young people to be critical, confident navigators of AI, the adults around them need to get their hands dirty too. (Paywalled, but the free preview makes the key point.)


    🔁 ICYMI

    “What I wish my parents or carers knew…” – Children’s Commissioner for England, December 2025

    A practical guide for parents on navigating children’s digital lives — but notice whose voice frames it. The title is drawn from what children said they wished adults understood. There’s also a companion activity pack designed to be used directly with children, which is worth bookmarking for anyone working in schools, or any parent. In a week where policy is being made about young people rather than with them, this is a wonderfully useful document that I will refer back to repeatedly, I think.


    🔬 What’s new with CAISE

    The ethics proposal is in! For those of you who aren’t researchers, this is the major hurdle we need to clear before we can get on and research. It’s not a small piece of work — to do it right, you need to know exactly what you’re going to do, how, and what the risks are. Hopefully, we’ll have some good news very soon.


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone who’s thinking about these questions.

  • We’ve submitted our ethics application!

    We’ve submitted our ethics application!

    There’s a particular kind of energy that comes with hitting ‘submit’ on a document you’ve been working on for weeks – equal parts relief, excitement, and uncertainty about whether you’ve got the balance right.

    Over the weekend, we submitted our ethics application for Project CAISE. It’s been months in the making, and it’s one of our first significant milestones.

    Submitting an ethics application is always a daunting prospect. It forces you to think – really think – about all the aspects of your plan. What are you really doing? Why are you doing it? Is that really the best way to do it?

    Done right, it allows you not only to make sure your “why” is crystal clear, but also to do a lot of the hard thinking ahead of time: yes, you’re going to interview people – but what, precisely, are you going to ask them? You say you’re going to survey people, but with what tool, and how do you know it’s GDPR-compliant? You’re going to video record that activity? Cool. How do you square keeping the meaningful visual interactions with the fact you’ve inadvertently recorded everyone’s faces? What even is the university’s infrastructure for storing things for the entire 10-year post-research retention period?!

    For a qualitative researcher, a lot of the above comes up every time. One thing that’s really hit me this time around, though: how do you meaningfully assess risk when the technology itself is evolving faster than our understanding of it? CAISE is a long(ish)-term project with 13–14-year-olds navigating AI in their everyday lives. Not only do we not know what AI will look like this time next year, but we are also aiming for an unvarnished and judgement-free exploration of it, to really support open communication and understanding.

    In a world where it seems to be ok to produce non-consensual graphic images of people – including children – using AI on social media, and where we ban children from spaces rather than working to make those spaces safer, as we would offline, the findings we produce are more important than ever.

    The ethics process asks us to anticipate and mitigate risks. But when the risks themselves are emergent and fundamentally uncertain? That’s trickier. We’ve made our best judgments based on current evidence and built in reflexivity, ongoing consent and a series of tiered pastoral and safeguarding processes. But there’s honesty required here too: we’re researching precisely because we don’t fully understand what’s happening yet, no matter how innocuous, or awful, it might be.

    We’ll be here with fingers crossed, full of nerves, until we hear back – which won’t be for a while…!