This week: social media bans, sovereign AI, and a government that wants children to say technology is bad while betting half a billion pounds that it isn’t.
🔍 This week I’ve been thinking about…
Last week, the Prime Minister summoned senior leaders from Meta, TikTok, X, Snap and Google to Downing Street and told them that a ban on children using their platforms would be preferable to a world where harm is the price of social media use. The government already has the powers to act. It is waiting for its Growing Up in the Online World consultation to close on 26 May.
That same evening, the Technology Secretary launched the £500m Sovereign AI fund, calling for the UK to be “an AI maker, not just an AI taker.”
I have been writing about this consultation since Issue 3, when I looked at its design: the parental login barriers, the single age bracket covering 10-to-21-year-olds, the leading questions, the lack of a save or back button. The problems I raised then were about access and design. The problems now are bigger.
The consultation is under legal challenge. Two fathers are preparing a High Court action after it emerged that the government’s survey contractor will use Amazon and Microsoft AI to process the responses. The tools being used to summarise public views on technology regulation are built by companies that stand to be regulated by the outcome.
And then there is the question of what children are actually being asked. The consultation’s questions about AI chatbots are, roughly: tell us how they are bad for you. Not how you use them. Not what you get from them. Not what you would change. Just: how are they bad. Children are being invited to participate in a process that has already decided what shape their answers should take.
Set that against the Sovereign AI fund announcement and the contradiction becomes structural, not just optical. The government is asking children to confirm that technology harms them. It is simultaneously investing half a billion pounds in the premise that technology is the country’s future. Children are being told they are part of the conversation. They are not being given the conversation that matters.
Regular readers will know that a recurring theme in this newsletter is who gets asked, what they get asked, and whether the answers are allowed to go anywhere uncomfortable. This week is a case study. The consultation closes in five weeks. The direction of travel was set before it opened.
📰 Three things worth your attention
1. Starmer summons social media bosses to Downing Street and threatens a ban — GOV.UK / The Scotsman
Senior leaders from Meta, TikTok, X, Snap and Google were called to Downing Street on Thursday and told that a ban on children accessing their platforms remains on the table. The government has secured the powers to act once the Growing Up in the Online World consultation closes on 26 May. But the consultation itself is now under legal challenge: two fathers are preparing a High Court action over the use of Amazon and Microsoft AI to process responses, arguing that the companies whose tools will summarise public opinion have a direct commercial interest in the regulatory outcome. Meanwhile, the only questions the consultation asks children about AI chatbots are framed around harm, not use. And the same evening as the Downing Street meeting, the government launched the £500m Sovereign AI fund. The messaging to children: technology is dangerous. The messaging to industry: technology is the future. Both cannot be the whole story.
2. Australia’s under-16 social media ban continues to not work — The Record / The Guardian
A Molly Rose Foundation study of more than 1,000 Australian children has found that 61% of 12-to-15-year-olds can still access their social media accounts four months into the ban. Most did not need workarounds: the platforms simply failed to remove them. The Foundation’s CEO warned that following suit would be a high-stakes gamble for the UK. A separate challenge to the ban in Australia’s High Court, on the grounds that it may infringe rights to political communication, continues. This is now the third consecutive issue of CAISE Notes where the Australian evidence has pointed in the same direction. The ban has not changed the landscape. It has changed who is expected to work around it.
3. A father’s weeks-long nightmare after his teen’s Discord account was hacked — Ars Technica
A father spent weeks trying to regain control of his 13-year-old daughter’s Discord account after it was hacked. She had signed up at 12, lying about her age as children routinely do. Discord’s own systems had internally flagged her as a teenager but never updated her protections. The hacker used her account to target 38 of her friends. Discord’s support chatbot kept auto-closing the father’s tickets, and it took a journalist’s intervention to get action. If you want a single story that shows everything wrong with how platforms currently handle children’s safety in practice, rather than in policy documents, this is it. The protections existed on paper. None of them worked when they were needed.
🔁 ICYMI
AI in Career Guidance: A Review of Evidence and Practice — Nuffield Foundation / Ada Lovelace Institute
AI tools are already being used to match young people with career pathways, from automated CV screening to chatbot-based career exploration. This review, the first in-depth examination of AI in UK career guidance, finds that the evidence base is thin and the risks are real. Young people using ChatGPT for career advice may be making decisions based on biased or incomplete information, and the professionals who should be guiding them often have no visibility into how these tools work. Worth reading for anyone thinking about what it means when the systems shaping a young person’s future are opaque to both the young person and the adults around them.
🔬 What’s new with CAISE
Big week. Ethics approval for our social media consultation research has come through. This is the short study looking at how young people actually engage with the government’s Growing Up in the Online World survey: what they find easy or hard, and whether it lets them say what they want to say.
If you work with groups of young people aged 10 to 21 who might want to take part before the 26 May deadline, please get in touch.
And some good news on the academic side: a short literature review submitted to ACM’s Interaction Design and Children (IDC) conference has been accepted. More details to follow once we can share publicly.
→ What are you seeing in your school, your research, or your own use of AI this week?
Let me know, or share this with someone who is trying to figure it out.


