    CAISE Notes – Issue #2

    This week: corporate accountability, AI in classrooms, and the questions that follow a tragedy.


    🔍 This week I’ve been thinking about…

    The story that’s stayed with me this week comes from Tumbler Ridge, British Columbia, where eight people, including students, were killed in a school shooting on February 10th. The suspected shooter had, months earlier, described violent scenarios to ChatGPT. OpenAI’s systems flagged those conversations automatically. Several employees raised concerns. And then the company decided the messages didn’t meet their threshold for alerting authorities, banned the account, and moved on.

    As someone who has done a lot of research into the importance of user privacy, I am alarmed at the thought of a world where AI companies routinely pass user conversations to law enforcement (although we know this is happening ever more in other datafied settings). These are genuinely hard calls, and false positives can have significant negative consequences.

    To bring it back to children, though: we are deploying generative AI that young people will use — in schools, in their pockets, in expensive private institutions — with institutions determining their own answers to difficult safeguarding questions. Who decides when a young person’s AI interaction warrants intervention? Who sets that threshold, and on what basis? And who is accountable when it goes wrong?

    In child safeguarding in schools, these questions have established (if complex) answers. Anyone involved in a school setting has a legal duty of care to raise safeguarding concerns. There are designated safeguarding leads. There are multi-agency frameworks. None of that infrastructure has followed AI into children’s lives. We are, in effect, treating children as early adopters of a system that hasn’t yet worked out its responsibilities to them.

    Issue #1 made the case that excluding children from technology isn’t the answer. But inclusion without protection isn’t the answer either. The work needs to go into building the frameworks, and into genuinely understanding how children navigate these systems, so that both become possible at once.


    📰 Three things worth your attention

    1. Meta knew parental supervision wasn’t working — and didn’t tell anyone (TechCrunch)

    An internal Meta study called “Project MYST,” produced with the University of Chicago, surveyed 1,000 teens and their parents and found that household rules and parental controls had little measurable association with whether teens used social media compulsively. It also found that teens facing the most difficult life circumstances — bullying, difficult home situations — were least able to moderate their use. This didn’t come to light through publication (although it tracks with a lot of academic research): it emerged through testimony in an ongoing social media addiction lawsuit in Los Angeles. Instagram head Adam Mosseri said he couldn’t recall the study, despite a document suggesting he’d approved it. The finding matters because parental controls are consistently what legislators reach for when they want to look like they’re acting. This study suggests they’re not the lever they’re presented as, which should surprise no-one who has talked to children or parents about them.

    2. “Students are being treated like guinea pigs” (404 Media)

    A leaked investigation into Alpha School, a US private school that charges up to $65,000 a year and promises that, thanks to AI, students can complete their core curriculum in two hours a day, reveals a significant gap between the marketing and the reality. Internal documents show the school’s AI-generated reading comprehension questions were assessed by employees as doing “more harm than good”: questions with illogical answer choices, questions answerable without reading the article, and a hallucination rate high enough that the AI was deemed unsuitable for one key task. The company relies on AI to test the quality of its AI-generated lessons, which, predictably, doesn’t catch the problems. Meanwhile, student surveillance is pervasive: an app called StudyReel monitors screen activity, webcam footage, mouse movements and app usage, and hours of video recordings of students’ faces are stored in a Google Drive accessible to anyone with the link. One employee noted internally that they needed to get better at “selling” the surveillance to parents. Former employees credit students’ strong test scores not to the AI, but to the human tutors who cared about them.

    3. ChatGPT flagged the Tumbler Ridge shooter — and didn’t alert police (The Verge)

    Months before the February 10th attack, the suspected perpetrator’s conversations with ChatGPT, describing gun violence, triggered OpenAI’s automated review systems. Employees debated escalation. Leadership concluded the posts didn’t meet the bar for an imminent, credible threat, banned the account, and took no further action. OpenAI has since said it proactively contacted the RCMP (the Canadian police force) after the shooting, though the provincial government noted that OpenAI met with a provincial representative the day after the attack without disclosing it held potentially relevant evidence. Canada’s federal AI minister has raised concerns about OpenAI’s safety protocols. There are no easy answers here, and I think that’s the point: this is hard, it requires policy, and no one has built that policy yet.


    🔁 ICYMI

    UNICEF’s Tinkering with Tech (UNICEF Digital Education)

    Against the week’s heavier stories, this is worth sitting with. UNICEF’s Tinkering with Tech initiative uses micro:bit devices (or similar) and design thinking to bring hands-on AI and STEM learning to children, with a deliberate focus on girls, and on building skills rather than consuming technology. From mid-2025, AI literacy was added to the programme, which is now expanding from its initial four-country pilot to Lao PDR, Ukraine, and Uzbekistan. The approach is meaningfully different from “here is an AI tool, use it”: students identify a real-world problem their community faces, build something, reflect. Worth bookmarking for anyone thinking about what AI education that centres children, rather than the technology, can look like.


    🔬 What’s new with CAISE

    Last Tuesday I was back at the University of Kent’s Institute of Cyber Security for Society (iCSS), where I did my PhD, talking about children’s use of generative AI. My talk, “Canaries in the AI Coal Mine”, drew on Project CAISE to argue that we need to shift the conversation from what we’re keeping children away from, to what we’re equipping them to deal with.

    The numbers keep moving: over 75% of UK 13–18-year-olds are already using generative AI, and in one recent survey, over half had confided something serious to an AI companion. They aren’t waiting for permission. The adults are still arguing about whether to let them in while they’re already through the door.

    I got some excellent, tricky questions from the audience, including what happens if your research participants suddenly get banned from a large number of the platforms you’re studying… I guess we will find out if and when it happens!


    What are you seeing in your school, your research, or your own use of AI this week?

    Let me know — or share this with someone thinking about these questions.