To react appropriately to the emergence of AI, we need to understand it. We’re making our internal AI Framework public as a way of being transparent about the kind of questions we’re asking ourselves about using AI in mySociety’s tools and services.
At our recent TICTeC conference, there were several great examples of how generative AI approaches can be applied to civic tech problems. But regardless of whether civic tech projects use AI directly, it is increasingly part of the tools we use, and it is changing the context our services exist in.
A key way mySociety works is by applying relatively mature technology (like sending emails) in interesting ways to societal problems (reporting problems to the right level of government; transforming Parliamentary publishing; building a massive archive of Freedom of Information requests, etc). This informs how we adapt and advance our technical approach – we want to have clear eyes on the problems we want to solve rather than the tools we want to use.
In this respect, generative AI is something new, but also something familiar. It's a tool: good at some things, not good at others. As with other transformative tech we've lived through, we need to understand it and develop new skills to apply it correctly to the problems we're trying to solve.
We currently have funding from the Patrick J. McGovern Foundation to explore how new and old approaches can be applied to specific problems in our long-running services. Across our different streams of work, we've been running experiments and making practical use of generative AI tools, working with others to understand the potential, and thinking about the implications of integrating a new kind of technology into our work.
Our basic answer to “when should we use AI?” is straightforward. We should use AI solutions when they are the best way of solving problems, are compatible with our wider ethical principles and reputation, and can be sustainably integrated into our work.
Breaking this down further led us to questions in six different domains:
- Practical – does it solve a real problem for us or our users?
- Societal – does it plausibly result in the kind of social change we want, and have we mitigated change we don’t want?
- Legal/ethical – does our use of the tools match up to our wider standards and obligations?
- Reputational – does using this harm how others view us or our services?
- Infrastructural – have we properly considered the costs and benefits over time?
- Environmental – have we specifically accounted for environmental costs?
You can read the full document to see how we break this down further, but this is consciously a discussion starter rather than a checklist.
Publishing this framework is similarly meant to start a conversation, and to anchor open discussion of what we've been learning from our internal experiments.
We want to write a bit more in the open about the experiments we've been doing, where we see potential, and where we see concerns. But this is all just part of the question at the root of our work: how can we use technology as a lever to help people take part in and change society?
—
Image: Eric Krull