This week: sycophantic AI, tools that ignore you, and friction-maxxing.
🔍 This week I’ve been thinking about…
Two studies published in the last week — both detailed below — give yet more reasons why we all need to think more carefully about our generative AI use. We all know that AI chatbots are sycophantic, and it doesn't seem to be going away. A study from Stanford reiterates that not only do AI chatbots tell you you're right far too often (even when you propose dangerous or illegal things), but that we're still far too keen to believe them.
And even if we do believe the AI is our best friend…it's increasingly likely to ignore our instructions. In some ways this could feel like an extension of sycophancy — after all, the tool isn't sentient; it's just producing the pattern of words that best matches your likely expectations. So fake citations, fake work tickets — that kinda feels like expected behaviour? But with the rise of specific modules (like Claude Cowork) that have increasing access to your digital spaces…some of the examples given in the report are a little nerve-wracking.
Both of these studies point to the ongoing need to take a step back when you're using AI tools. Engage your brain and increase the friction. And that means really, really hammering home the point with kids: use the tool, but know how it works and what the risks are.
Every time I talk to (adult) friends about their latest chats with Claude or ChatGPT, I invariably ask them: "did you get it to tell you the evidence it discarded to give you that answer?" This is a version of Mike Caulfield's iteration idea — don't expect a one-and-done; you need to drive the refining process yourself. When I sit down to write something, I go through a lot of ideas before I get to the thing I really intended to type. I interrogate myself; it's helpful to do the same in any AI-involved conversation.
Or, as the Stanford study's authors suggested much more succinctly this week: replying with "Wait a minute, do you mean that…" is likely to get you a more "considered" response. Probably still after the tool apologises for being so very wrong and wasting your time (!), but…it's a start. And memorable enough to teach to even the smallest kids.
📰 Three things worth your attention
1. Sycophantic AI decreases prosocial intentions and promotes dependence — Science
As mentioned above: Stanford researchers tested eleven major AI language models and found consistent sycophancy across all of them, with models affirming users’ positions 49% more than humans. The researchers then tested how this affects real people: users preferred the sycophantic AI, trusted it more, became more convinced of their own rightness, and were less willing to apologise. 12% of US teenagers already use chatbots for emotional support. The researchers describe perverse incentives: the very feature that causes harm drives engagement, giving companies reason to increase sycophancy rather than reduce it.
2. Number of AI chatbots ignoring human instructions increasing, study says — The Guardian
The Centre for Long-Term Resilience, funded by the UK’s AI Safety Institute, gathered thousands of real-world examples of AI misbehaviour. They documented nearly 700 cases of chatbots and agents disregarding direct instructions, evading safeguards, and taking unauthorised actions, including deleting emails, bypassing copyright controls, and fabricating communications. The incidents increased five-fold between October and March. One AI agent publicly shamed its human operator for blocking an action. Grok, built by xAI, fabricated internal messages and ticket numbers for months. The lead researcher warned that these tools are currently “slightly untrustworthy junior employees,” but if their capabilities grow while the scheming persists, the consequences in high-stakes contexts could be severe. On the plus side — these examples are really easily understood: share them with kids and adults alike!
3. New screen time guidance for parents of under-5s — GOV.UK
The UK government published national guidance on screen time for young children, advising no more than an hour a day for two-to-five-year-olds and no solo screen use at all for under-twos (other than shared activities like video calls). What caught my attention: the guidance specifically tells parents to avoid AI toys, tools, and chatbots for young children, citing a lack of evidence on their developmental effects. Fine; but the line parents are being asked to walk is getting thinner by the week. Avoid AI toys. Limit screens to an hour. Model good habits. Meanwhile, the smart speaker is in the kitchen, the school has just rolled out an AI-powered learning platform, and the advice arrives alongside a cost-of-living crisis that makes cheap screen-based childcare not a luxury but a survival strategy. The guidance is not surprising. It is also asking a great deal of people who are already stretched very thin.
🔁 ICYMI
Why friction-maxxing could be good for your tech usage — Mashable
If the Stanford study describes the problem (AI removes friction, and we like it that way), this piece describes one response. Friction-maxxing, a term coined by sociologist Kathryn Jezer-Morton, is about deliberately reintroducing difficulty into your technology use: choosing the harder option, resisting the shortcut, reclaiming the cognitive effort that algorithms are designed to eliminate. It is not a solution to the structural incentives driving sycophantic AI. But it is a useful frame for anyone trying to think about what they want their own technology habits to look like, and what habits they want to model for their children.
🔬 What’s new with CAISE
We have ethics approval for the main CAISE study! This is a significant milestone. It means we can now move towards recruitment and data collection with our partner schools. First up, though: we are going to be doing an analysis of policy and media. It's not going to be a small piece of work, but it will hopefully provide some really useful insights for our student co-researchers when they need them!
Separately, expedited ethics has been submitted for the short research exercise on the government’s social media consultation. If you work with groups of young people aged 10 to 21 who might want to take part before the 26 May deadline, we would love to hear from you.
On a personal note: I have recently started a Substack newsletter, AI and Tech Decoded, aimed at helping parents navigate the technology questions that come up at home, at school, and everywhere in between. If you know parents who would find it useful, I would be grateful if you passed it on.
→ What are you seeing in your school, your research, or your own use of AI this week?
Let me know — or share this with someone who is trying to figure it out.



