We’ve submitted our ethics application!

[Image: a cartoon of a woman examining a document with an oversized magnifying glass, a tick in the centre of the glass, against an abstract lilac-toned background.]

There’s a particular kind of energy that comes with hitting ‘submit’ on a document you’ve been working on for weeks – equal parts relief, excitement, and uncertainty about whether you’ve got the balance right.

Over the weekend, we submitted our ethics application for Project CAISE. It’s been months in the making, and it’s one of our first significant milestones.

Submitting an ethics application is always a daunting prospect. It forces you to think – really think – about all the aspects of your plan. What are you really doing? Why are you doing it? Is that really the best way to do it?

Done right, it not only makes sure your “why” is crystal clear, but it also lets you do a lot of the hard thinking ahead of time: yes, you’re going to interview people – but what, precisely, are you going to ask them? You say you’re going to survey people, but with what tool, and how do you know it’s GDPR-compliant? You’re going to video record that activity? Cool. How do you square keeping the meaningful visual interactions with the fact you’ve inadvertently recorded everyone’s faces? What even is the university’s infrastructure for storing things for the entire 10-year post-research retention period?!

For a qualitative researcher, a lot of the above comes up every time. One thing that’s really hit me this time around, though: how do you meaningfully assess risk when the technology itself is evolving faster than our understanding of it? CAISE is a long(ish)-term project with 13–14-year-olds navigating AI in their everyday lives. Not only do we not know what AI will look like this time next year, but we are also aiming for an unvarnished and judgement-free exploration of it, to really support open communication and understanding.

In a world where it seems to be ok to produce non-consensual graphic images of people – including children – using AI on social media, and where the response is to ban children from spaces rather than work to make those spaces safer, as we would offline, our findings are more important than ever.

The ethics process asks us to anticipate and mitigate risks. But when the risks themselves are emergent and fundamentally uncertain? That’s trickier. We’ve made our best judgments based on current evidence and built in reflexivity, ongoing consent and a series of tiered pastoral and safeguarding processes. But there’s honesty required here too: we’re researching precisely because we don’t fully understand what’s happening yet, no matter how innocuous, or awful, it might be.

We’ll be here with fingers crossed, full of nerves, until we hear back – which won’t be for a while…!
