This week: AI in schools, academic integrity, and the audit trail that changes how you write.
🔍 This week I’ve been thinking about…
A report out this week finds that 71% of UK higher education students now use AI for their studies. Three quarters of them are anxious about being accused of using it.
Those two numbers, sitting next to each other, describe a problem. It is not the problem most institutions think they have.
The standard response to AI in education has taken one form: detection. Plagiarism tools have added AI detection modules. A post I saw on Reddit last week suggested that students write in a single shared document, so keystroke logs can demonstrate the work is theirs. This is an audit trail: show your working, prove it was you.
There is a structural problem with this. Writers do not write linearly. Academics do not write linearly. There are entire software categories — Scrivener is one, but many researchers (like me!) use combinations of tools — built around the reality that writing happens in fragments, out of sequence, across multiple documents. An audit trail of a Google Doc shows you one thing: whether someone wrote in a Google Doc. A pasted block of text is not the smoking gun some may want it to be.
The deeper problem is that AI detection flags good writing. Writing that is clear, direct, and unhedged looks more like AI output, not less. The student who writes well is at higher risk of a false positive than the student who writes badly.
The effect on students is measurable. Anxiety about being falsely accused is now widespread, even among students who are using AI in entirely ordinary ways. The effect on teachers is just as measurable: confidence in identifying AI-generated work has fallen sharply over the past year, and the tools they have been handed are not helping. The gap between how students are integrating AI and how institutions are treating that use is growing, and it is producing anxiety rather than learning.
Most of the students in these surveys are not trying to game anything. Some are overwhelmed. Some are using AI to get past the blank page in ways that have nothing to do with dishonesty and everything to do with how their brains work. The audit trail catches them all equally.
📰 Three things worth your attention
1. UK student AI use hits 71% as 75% face AI detector anxiety — FE News
This article reports on a Studiosity/YouGov report, 2026 AI and Wellbeing in Education. UK students are now using AI for 71% of study tasks: double last year’s rate and the highest of any country surveyed. Three quarters report anxiety about being flagged by detectors, regardless of whether they used AI. Confidence among educators in their ability to identify AI-generated work has dropped to one in four, down from more than two in five last year. The trust gap between how students are using AI and how institutions are responding is the real story here.
2. Trump’s AI framework targets state laws, shifts child safety burden to parents — TechCrunch
The US administration put out a federal-level framework for a single national AI policy last week. It quite nakedly pre-empts state-level regulation and is pro-industry. The first point covers child safety, which it frames primarily as a parenting responsibility: parents must be given better parental “tools”, and then they must get on with it. By contrast, hard obligations on platforms are limited and hedged with terms such as “commercially reasonable”. This is a different approach from the one we’ve seen in the UK and Europe (perhaps unsurprisingly), where platforms have duties, not just options. That framing difference matters enormously for what gets built, and it is something we will be considering as CAISE carries on.
3. Australia’s under-16 social media ban: what the teenagers actually say — BBC News
A short BBC video puts the question directly to Australian teenage girls: has the ban changed anything? The answer, largely, is no. Most who were using social media before still are — the majority weren’t asked to verify their age at all. Those who weren’t on it before still aren’t. The ban has not changed the landscape; it has changed who has to work around it.
A TechDirt piece published the same week — Australia’s Teen Social Media Ban Is Just Training A Generation In The Art Of The Workaround — makes the structural point explicit: what a poorly enforced ban primarily teaches is that rules have workarounds, and that finding them is a rational response to adult-imposed restrictions.
🔁 ICYMI
How kids are actually using AI — BBC Future / Pew Research Center / Common Sense Media
Two new surveys — from Pew Research Center and Common Sense Media — asked US teenagers and their parents about AI, and then compared the answers. The gap is striking. Only 51% of parents believe their child uses AI. The actual figure from the teenagers: 64%. Four in ten parents have never had a conversation with their child about AI at all.
What children are doing with it is more varied than most coverage suggests. The most common use is simply looking things up. Homework help comes second. But 12% use it for emotional support, 16% for casual conversation — and there are significant racial disparities in those figures, with Black teenagers far more likely than White teenagers to use AI in these ways.
The attitudinal gap is just as wide. 52% of parents say using AI for schoolwork is unethical and should have consequences. Exactly 52% of teenagers say it is innovative and should be encouraged. One of those groups is out of step with where things are going. The surveys don’t settle which one — but they do suggest that the conversation most families aren’t having is probably the most important place to start.
🔬 What’s new with CAISE
An expedited ethics application has been submitted for the short research exercise on the government’s social media consultation — looking at how young people actually engage with the survey, what they find easy or hard, and whether it lets them say what they want to say. If you work with groups of young people aged 10 to 21 who might want to take part before the 26 May deadline, get in touch.
→ What are you seeing in your school, your research, or your own use of AI this week?
Let me know — or share this with someone who is trying to figure it out.