-
Generative AI is good at solving some kinds of problems, and bad at solving others. With the rush to apply AI approaches across the public and private sector, we want to encourage people to use the right tool for the right problem. This blog post proposes a test that makes it easy to understand whether or not an AI application is genuinely beneficial for the job in hand.
Generative AI has no concept of truth. It is designed to create outputs that are internally consistent, and this might or might not coincide with true things when the training data and context are well aligned. By now, we’ve all heard examples of false-positive hallucinations, where AI has asserted that something exists or was said because doing so is internally consistent with the question — but which turns out not to be true. Depending on the application, if unchecked, this can have catastrophic effects, meaning that validation of outputs is essential.
How to assess your project for AI suitability
In our recent Shifting Landscapes report, we shared a simple matrix that helps to assess how useful it is to apply an AI approach to any given problem.
It asks how hard/expensive it currently is to produce a solution without AI, and how hard/expensive it is to verify that the solution is correct, with four potential outcomes:
|  | Producing a solution is cheap/easy | Producing a solution is hard/expensive |
| --- | --- | --- |
| Verifying the solution is cheap/easy | Weak AI benefits (which may increase at scale) | Significant AI benefits |
| Verifying the solution is hard/expensive | Get a human to do it | Break down the verification problem (and repeat) |

Let's look at each possible outcome in turn:
1. Weak AI benefits (which may increase at scale)
Producing a solution is cheap / verifying the solution is cheap

This applies to tasks where AI tools might help people complete tasks more efficiently, but where the resulting impact or time savings are not significant. Over time/mass use, the benefits might increase.
Examples here include tasks like letter-writing and making summaries of documents or transcripts. If AI can do the initial grunt work, a human can take over and make tweaks to the output, nominally saving some time.
In our own field of civic tech, we can see this kind of tool being used to help people navigate bureaucracy: it might help format letters to representatives, or make effective appeals when FOI requests are refused.
Cheap processes at scale can also unlock new collective benefits. For instance, Muckrock uses LLMs to extract information and success/fail status from individual FOI responses. Doing this manually per request is easy for people, but requires lots of people to do the work to create a useful dataset across the entire corpus. An AI approach drops the costs further, which produces a small benefit on an individual scale, but collectively creates useful data.
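The pattern described above, cheap per-item classification that becomes valuable in aggregate, can be sketched in a few lines. This is a minimal illustration, not Muckrock's actual pipeline: the LLM call is stood in for by a simple keyword heuristic, and the status labels are invented for the example.

```python
# Sketch of "cheap per item, valuable in aggregate": classify each FOI
# response, then aggregate across the corpus. classify_response() is a
# stand-in for an LLM call; the labels are illustrative only.
from collections import Counter


def classify_response(text: str) -> str:
    """Stand-in for an LLM call that labels one FOI response."""
    lowered = text.lower()
    if "refuse" in lowered or "exempt" in lowered:
        return "rejected"
    if "enclosed" in lowered or "attached" in lowered:
        return "success"
    return "unclear"


def summarise_corpus(responses: list[str]) -> Counter:
    """Run the per-item classifier over every response and aggregate."""
    return Counter(classify_response(r) for r in responses)
```

The point is the shape of the workflow: no single classification is worth much, but running it over every response in the corpus yields a dataset nobody would have assembled by hand.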
As we note in our AI Framework, we have to recognise that a large number of small uses can build up into a negative effect. For instance, AI-created objections to planning applications might overwhelm a system that was built for a world in which there are higher hurdles to lodging an objection.
2. Significant AI benefits
Producing a solution is expensive / verifying the solution is cheap

In this scenario, we're thinking of situations where it is harder for a human to create a credible solution than it is to check if the outputs are valid. Conceiving a solution might be hard because it requires specialised knowledge, such as coding, or significant time and resources, like the analysis of a huge dataset; but it would be easy for a human to see whether or not the solution is working as intended.
One of the biggest practical uses of AI so far has been seen in coding, because coding problems fit so well into this category, and so provide potential benefits. The structure of computer code is often formally checkable (for at least syntax errors), and often there is a relatively short turnaround between “having code” and “checking the code is effective”. This isn’t to say that all coding fits in this box, but enough that a clearly productive set of tools exists.
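As a minimal illustration of why verification is cheap for code, the sketch below applies two mechanical checks to a hypothetical generated snippet: Python's own parser for syntax, and a known test case for behaviour. Both run in a fraction of the time it would take to write the code from scratch.

```python
# Two cheap, mechanical checks on generated code: does it parse, and does
# it pass a known test case? The "generated" snippet here is hypothetical.
import ast


def syntax_ok(source: str) -> bool:
    """Cheap check 1: does the code even parse?"""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False


def behaviour_ok(source: str) -> bool:
    """Cheap check 2: does it pass a known test case?

    exec() assumes the source is trusted or sandboxed.
    """
    namespace: dict = {}
    exec(source, namespace)
    return namespace["add"](2, 3) == 5


generated = "def add(a, b):\n    return a + b\n"
assert syntax_ok(generated)
assert behaviour_ok(generated)
```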
There are strong potential benefits here because an expensive process can be made cheaper, while the quality of the output can be checked through relatively cheap verification methods.
This segment of applications can be impactful even where access to models is relatively expensive, as a small number of LLM users can have a big impact through the products that emerge.
3. Get a human to do it
Producing a solution is cheap / verifying the solution is expensive

Some LLM processes produce outputs that cannot be quickly verified by automatic or human means.
Here, using an LLM for the initial solution might be less effective than having a human do it from the start. While tweaking an email that contains slightly poor wording is a cheap correction, adjusting a multi-page report written by an LLM (involving fact checking, correction, restructure, etc) might be more complicated than just having someone write the original work.
When humans approach a piece of work like this, the production and verification processes pretty much happen at the same time, because the skills required to produce the work are the same ones that suggest the work is valid.
“Use a human” is often most clearly the sensible approach for projects that need a high level of accuracy and confidence in the material produced. For example, we talked to OpenFun about their LawTrace site, which brings together legislative information in Taiwan. They made a point of choosing not to use AI at all in this project. Having accurate information was far more important to users than any convenience AI could introduce.
4. Break down the verification problem
Producing a solution is expensive / verifying the solution is expensive

Sometimes solutions are expensive for a combination of reasons, and this can justify investment in trying to split the verification problem into smaller problems.
Through a sequence of different checks on LLM output, we can move problems towards being strong uses of AI: the LLM dramatically reduces the time needed to produce the solution, while the verification costs remain manageable.
As an example, our APPG scraper sits in this category. We wanted to get accurate lists of parliamentary group memberships from dozens of different websites. Our original idea was that we would need to use a crowdsourcing approach, because we thought an LLM would be vulnerable to inventing lists of MPs.
But after some consideration, we invested time in a step where we could verify with code whether or not the names extracted were actually listed on the relevant sites. We can see a similar example in the public consensus platform Pol.is, where category descriptions are linked back to concrete sources to facilitate easier double checking.
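The verification step described above can be sketched as follows. This is an illustrative simplification, not our actual APPG scraper: it simply rejects any extracted name that does not literally appear in the page text the LLM was given, which cheaply catches invented names.

```python
# Sketch: accept LLM-extracted names only if they appear verbatim in the
# source page text; anything the model could have invented is flagged.
def verify_names(extracted: list[str], page_text: str) -> tuple[list[str], list[str]]:
    """Split extracted names into (confirmed, suspect) against the source page."""
    haystack = page_text.lower()
    confirmed = [n for n in extracted if n.lower() in haystack]
    suspect = [n for n in extracted if n.lower() not in haystack]
    return confirmed, suspect


page = "Officers: Jane Smith MP (Chair), John Jones MP (Secretary)"
confirmed, suspect = verify_names(["Jane Smith", "John Jones", "Alex Invented"], page)
assert confirmed == ["Jane Smith", "John Jones"]
assert suspect == ["Alex Invented"]
```

A check this crude cannot confirm that a membership list is complete, but it turns the most dangerous failure mode, hallucinated names, into a cheap mechanical test.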
Similarly, you might find that aspects of your problem (if not the whole problem) are appropriate for mechanical checking. Could LLM-generated code make a custom verification process easier? Can a series of automatic/human checks be made more efficient with a clear verification workflow? Each individual improvement moves your project closer to being a potentially strong use of AI.
Investment in the verification process might move the problem closer to having weak/strong AI benefits, where outputs can be derisked through cheap quality checks — but you’ll only know through systematically breaking it down in this way.
We hope that, by sharing this matrix, we will encourage more thoughtful deployments of AI technology in governments and beyond. Please feel free to share it with those who will find it useful.
—
This blog post has been adapted from our report Shifting Landscapes – A practical guide to pro-democratic tech.
-
FOI Fest was a one-day conference that took place in London on 19th February, 2026. Over the next few weeks, we'll be putting each session out as a separate podcast.
In this keynote, Warren Seddon, Director of FOI and Transparency at the Information Commissioner's Office, gave a frank assessment of the current challenges of dealing with an unprecedented backlog of requests.
This session can also be watched as a video, on YouTube.
—
Transcript:
0:00 [Gavin Freeguard] Welcome to FOI Fest 2026!
0:04 It’s my pleasure to welcome to the stage our morning keynote, that is Warren Seddon from the ICO.
0:09 [Warren] Thank you everyone for having me speak today. First of all, I feel like we start with an apology to Alex, who's been involved in the organising for today, because I wrote my remarks, and then I looked at his email and I realised he sent me a brief for my remarks that talked about being energising and peppy, and you probably didn't say peppy, and to get the day off on a footing.
-
In July 2024, we published our AI Framework, a first attempt at setting guidelines on how we utilise AI in our work at mySociety. The basic principles running through that framework can be summed up as “Use AI responsibly, and only when, all things considered, it is the best tool for the job”.
That position will serve us, and any organisation, for life — but, with such a fast-moving field and such speedy integration across so many of our areas of work, the consideration of ‘all things’ must happen at regular intervals. It’s important that we keep checking in, to ensure that the framework is still providing timely and relevant guidance that reflects the current circumstances.
With this in mind, we’ve implemented a three-pronged approach:
- The AI Framework itself: this basic set of principles and questions is designed to act as a resource that staff members can keep referring back to if in doubt — and is a living document which we regularly update to reflect these fast-moving times.
- Our AI register: an internal spreadsheet, where staff are required to note new uses of AI, so that we have an up-to-date picture of how it is being deployed across the organisation. This has already collected diverse use cases, from generating the transcripts of our podcasts with third party AI-based tools, to our own experiments with LLMs, like detecting inappropriate use of WhatDoTheyKnow. Where we think there are lessons for others to benefit from, we've written up these use cases here on our blog.
- A regular ‘Generative AI and Machine Learning’ meeting: this gives us the opportunity to discuss entries on the register, ask questions and consider any issues that have arisen.
So, with all this analysis going on, what are the changes we’ve seen since the first iteration of the framework? As you might imagine, they’re far-reaching, both in our own practices and in the wider world.
In our development: There’s been increasing experimentation and use of coding agent tools from our developers, always with an eye to whether these are producing valid outputs that genuinely save time, or solve problems that other tools couldn’t.
Some of the datasets we create now rely on LLM processes, where flexibility in interpreting and transforming language helps us create and combine data in ways not possible through other methods. These include the collection of data on APPGs; and our WriteToThem Insights work.
We’ve experimented with some AI-based problem-solving on our websites and infrastructure: for example, screening for personal immigration requests that have mistakenly been submitted on WhatDoTheyKnow (a longstanding issue); and we are in the early stages of exploring machine learning approaches to help us understand potential ways of handling any abusive messages sent through WriteToThem.
As for the external landscape: increasingly, we're seeing funding opportunities that centre around the responsible deployment of AI for the good of democracy or transparency; and in our international community, via our Communities of Practice and TICTeC, we're hearing of more and more innovative applications of AI to civic tech tools, and of ways of monitoring its use by the state.
—
Image: Steve Johnson
-
The government is making a significant investment into AI in public services, and systems are changing apace.
AI is increasingly being deployed in every department of government, both national and local, and often through systems procured from external contractors.
In a recent article for Public Technology, mySociety’s Chief Executive Louise Crow flags that we urgently need to update our transparency and accountability mechanisms to keep pace with the automation of state decision-making.
This rapid adoption needs scrutiny: not only because significant amounts of money are being spent; but also because we’re looking at a new generation of digital systems in which the rules of operation are, by their very nature, opaque.
To see Louise’s thoughts on what needs to change, and why, as this new technological era unfolds, read the full piece here.
If you find it of interest, you may also wish to watch this recent event at the Institute for Government, The Freedom of Information Act at 25, where Louise was one of six speakers reflecting on the future of transparency in the UK.
—
Image: Alex Socra
-
mySociety was founded on one seismic technological change: the arrival of the internet, bringing radical new possibilities to the ways in which we engage with democracy.
Now we’re seeing a second upheaval, just as potentially explosive: the wide adoption of generative AI and machine learning tools — particular kinds of artificial intelligence — not least by the UK government, who have made a commitment to see AI “mainlined into the veins of the nation”.
From the visible and novel, like ‘AI bot’ MPs; to the hidden and less-interrogated, like the algorithms that drive decision-making around benefits; to the new capabilities around working with large text datasets that we ourselves are experimenting with at mySociety: artificial intelligence is changing the way democracy works.
We’ve been thinking about AI for some time, as have our colleagues around the world — TICTeC 2025 had a strong strand of pro-democracy organisations showcasing how they are using new technologies to hold authorities to account and support public engagement; alongside developers showing the tools that aim to make the government more responsive.
AI is coming to democracy, whether we like it or not. In many places, it’s already here.
But there are implementations in which it can be highly beneficial to us all; and ways in which it can present a clear and present danger to democracy.
It benefits everyone if there is a high level of understanding of both the challenges and the opportunities of AI in government. Democratic decision makers need to understand digital tech in order to legislate effectively around it, to develop and procure it effectively.
This is not just so that they can deliver services more efficiently, but also to ensure that they retain the legitimacy of democratic government by using tech and AI in a way that ensures transparency and accountability, preserves public trust and allows the public to understand and participate in the decisions that affect their lives.
Reflections for our time
Over the next few months, we’ll be sharing our own thoughts and experience — alongside invited guest writers who are thinking about how AI interacts with democratic processes and institutions, and how to make that better — in a series of short pieces.
These will examine the different ways that AI is affecting the things we care about here at mySociety:
- Transparent, informed, responsive democratic institutions
- Politicians and public servants who work for the public interest
- Democratic equality for citizens: equal access to information, representation and voice
- A flourishing civil society ecosystem
- The effective and principled use of digital technologies
- Action from politicians to match the evidence of the climate crisis and the level of public concern
- Better communication between politicians and the public, creating space for climate action.
Stay informed
If you’d like to get updates in your inbox, make sure you’ve checked ‘artificial intelligence’ as an interest on our newsletter sign-up form (if you already receive our newsletters, don’t worry – so long as you use the same email address, this will just update your preferences. Just make sure you’ve ticked everything you’re interested in).
By also completing the ‘how do you identify yourself’ section, you’ll help us send you the most relevant material: that means guidance if you work in government or build tech; data and our analysis if you’re a researcher; tools for holding authorities to account if you are an individual or work in civil society, and so on.
—
Image: Adi Goldstein
-
Often, responses published on our Freedom of Information site WhatDoTheyKnow result in newspaper stories, or feed into campaigns or research.
When this happens with one of your own requests, you can add a link to the page. These then appear in the side column, like this:
It’s a great way for other users of the site to see the direct results that come from the simple act of making an FOI request — and now we’ve also added an ‘FOI in Action’ page, where you can see all of them in one place.
Here are five stories that have caught our eye from that page:
- A request for all communications around Eric Trump’s March 2025 visit to Edinburgh allowed the public to see the briefings made to the First Minister of Scotland ahead of their meeting — and resulted in this national news story.
- Minutes from the Ministry of Justice’s Working Group on Unregistered Marriages, acquired via this request, fed into a chapter of research on many aspects of modern marriage, this one being on unregistered Muslim marriages.
- All evidence points to this response being the basis for the New York Times piece [paywalled] that broke the massive story of the government’s £2.4 million expenditure to hide a life-or-death data breach, concerning Afghans who worked with the British forces.
- A 2022 report into misogyny in the British Army was not released until requested and then pursued via the user’s right to an internal review. The user knew of its existence thanks to previous news stories referring to it. The Byline Times reveals the report’s shocking findings in this news story.
- This 2019 report from The Bureau of Investigative Journalism looked into public sector adoption of algorithmic and data-driven systems, presciently foreseeing the explosive adoption of AI in our public services. This was based on several requests from a single user.
We’re not far off listing 3,000 citations on WhatDoTheyKnow — and these are just the ones users have added. If your request resulted in a piece of journalism, informed a campaign or fed into research, do add it in. As well as helping to show others what FOI can do, it provides a significant link back to the external site, helping bring it more readers.
—
Image: Peter Lawrence
-
AI and automated decision-making technologies are increasingly being used in government, and due to their opaque nature, it's vital that we bring more transparency to their workings. In this event, three researchers and civil society actors talk about how they have used Freedom of Information to do just that.
You’ll hear from Morgan Currie from the University of Edinburgh; Gabriel Geiger of Lighthouse Reports, and Jake Hurfurt from Big Brother Watch. Learn what concerns them about this new age of automated decision-making; the practical tips and techniques they’ve used to bring hidden algorithms to light; and what needs to change in our laws as a matter of urgency.
—
More information
- Blog post, with links to the video and slides
- Morgan Currie’s research (with Alli Spring): Algorithmic Transparency in the UK
- Lighthouse Reports’ Suspicion Machines, as presented by Gabriel Geiger
- Big Brother Watch’s report on the ‘error-riddled AI tool to be used by the Home Office’.
- Find out more about the Access to Information Network
Transcript
Louise Crow 0:03
Hello, everyone, welcome. I'm Louise Crow, Chief Executive of mySociety.
Louise Crow 0:08
Thank you for joining us for this one-hour session on how Access to Information can help us understand AI decision making in government.
-
If you were one of the 100+ people who joined us for today’s webinar, you’ll already know it was hugely informative and timely.
We packed three fascinating speakers into the course of one hour-long session on using FOI to understand AI-based decision making by public authorities. Each brought so many insights that, even if you were there, you may wish to watch it all over again.
Fortunately, you can! We’ve uploaded the video to YouTube, and you can also access Morgan’s slides on Google Slides, here and Jake’s as a PDF, here (Jake actually wasn’t able to display his slides, so this gives you the chance to view them alongside his presentation, should you wish).
Morgan Currie of the University of Edinburgh kicked things off with a look at her research 'Algorithmic Accountability in the UK', and especially how opaque the Department for Work and Pensions (DWP)'s use of automation for fraud detection has been, over the years.
Morgan explains the techniques used to gain more scrutiny of these decision-making and risk assessment processes, with much of the research based on analysing FOI requests made by others on WhatDoTheyKnow, which of course are public for everyone to see.
Secondly, in a pre-recorded session, Gabriel Geiger from Lighthouse Reports gave an overview of their Suspicion Machines Investigation which delves into the use of AI across different European welfare systems. Shockingly, but sadly not surprisingly, the investigation found code that was predicting which recipients of benefits are most likely to be committing fraud, with an inbuilt bias against minoritised people, women and parents — multiplied for anyone who falls into more than one of those categories.
Gabriel also outlined a useful three-tiered approach to this type of investigation, which others will be able to learn from when instigating similar research projects.
Our third speaker was Jake Hurfurt of Big Brother Watch, who spoke of the decreasing transparency of our public bodies when it comes to AI-based systems, and the root causes of it: a lack of technical expertise among smaller authorities and the contracting of technology from private suppliers. Jake was in equal parts eloquent and fear-inducing about what this means for individuals who want to understand the decisions that have been made about them, and hold authorities accountable — but he also has concrete suggestions as to how the law must be reformed to reflect the times we live in.
The session rounded off with a brief opportunity to ask questions, which you can also watch in the video.
Presented in collaboration with our fellow transparency organisations AccessInfo Europe and Frag Den Staat, this session was an output of the ATI Community of Practice.
—
Image: Michael Dziedzic
-
TICTeC, our Impacts of Civic Technology conference, has been running since 2015. Over the years, we’ve seen shifts within both tech and democracy that have been reflected as priority topics: from the foundational (and evergreen) question of ‘how can you assess the value of civic technology if you don’t measure its impacts?’, to the rise of authoritarian ‘strong man’ leaders across the world, to a surge of enthusiasm for what blockchain can do around civic tech.
As each of these topics rise to the top of the civic tech community consciousness, TICTeC has provided a natural place to air questions, concerns and solutions.
This year, of course, the foundation-shaking issue is AI. Compared to 2024, when the technology was just beginning to be applied in our field, there’s been a maturing of the discussion, and much more concrete engagement with both the opportunities and the challenges that AI brings around government, truth, trust and delivery.
Our job is to make sure we steer towards the good — or, to phrase it in alignment with mySociety’s own aims, to examine how to engage critically and transparently with AI to create a fair and safe society.
AI across TICTeC 2025
The theme of AI was woven through the conference: where it wasn’t the primary topic itself, it coloured our thinking and had relevance everywhere.
Sessions dealing primarily with AI could be divided into three broad angles:
- Since AI is already making inroads into governance systems, how can we ensure it is used well?
- How have AI’s capabilities been harnessed to make civic tech tools, improve functionality or increase efficiency, and how’s that going?
- Can tools counter the problems that AI presents around truth and trust?
Let’s look at each of these in turn.
AI and democratic governance
Both of our keynote speakers were keen to point out the need for oversight and citizen participation as AI is rapidly adopted across government systems.
Marietje Schaake, whose presentation you can rewatch here, warned of the dangers of private tech firms holding more power than our constitutional democracies, thanks to the limitless profits to be made from this new technology; while Fernanda Campagnucci (presentation here) advocated for citizens to be allowed into the decision-making processes not just around governance itself, but in the making of the tools that facilitate it.
We also heard from the people at the frontline of governance. An instructive session from Westminster Foundation For Democracy and the Hellenic Parliament (not recorded) quizzed participants on how comfortable they would be in easing the administrative burden of parliaments by allowing AI to help categorise, filter and even answer letters from citizens. Would our opinion change if we knew, for example, that there was a backlog of 40,000 messages to representatives?
In a session deeply rooted in the realities of running a local authority during a period of tech acceleration, Manchester City Council explained that in a city where 450,000 people don’t even use the internet, it is crucial to ensure AI is being used ethically and to communicate how it affects citizens’ lives: “Whether or not you choose to interact with AI there’s no way of opting out – AI based decision making is happening around you.”
Three speakers from the Civic Tech Field Guide laid out the case for audits on how AI is being used in your own community, showing how anyone can do it, and Felix Sieker from Bertelsmann Stiftung made a strong argument for public AI, with proper accountability and democratic oversight, rather than the power being concentrated in a handful of private firms — something that is already being developed in several different forms, including by Mozilla.
MIT GOV/LAB ran a workshop (not recorded) in which we could chat with a simulation of a person from the future about the effects of a climate policy, then decide whether or not we would implement that policy once we had a human account of its results. This is part of ongoing research into helping to break deadlocks in policy decision-making.
How AI is already being used in civic tech
Both Code for Pakistan and Tainan Sprout showed how they've deployed AI to allow citizens to query dense policy documentation and get answers that are easy to understand.
Demos talked about the work they’ve been doing around a new AI-powered digital deliberation process called Waves, hoping to ‘do democracy differently’ in our current crisis of mistrust.
Dealing with AI and misinformation
Camino Rojo from Google Spain showcased new tools, some of which are shortly to be rolled out, to help counteract misinformation. In particular, these allow users to check whether or not media displayed in search results was artificially generated. At the moment, the onus lies with the image generator to provide this information. Strict guidelines apply, in particular, to those advertising around sensitive areas such as elections.
AI and mySociety
In the final session of the conference, we presented the various ways that we’ve been exploring how AI can support mySociety’s work. You can rewatch this session in full here.
We have been guided by our own AI framework, in which we set out the six ethical principles to which we adhere when adopting this (or any) new technology. In essence, these can be boiled down to the single sentence: "We should use AI solutions when they are the best way of solving significant problems, are compatible with our wider ethical principles and reputation, and can be sustainably integrated into our work."
In other words, we are not working backwards from the existence of AI to see what we could do with it, but approaching from the question of what we want to achieve, and then examining whether AI would aid us to do so more efficiently.
In this session you can discover how we've used AI to deal more effectively with problems in bulk, and to make information easier for everyone to access, across our work in Transparency; and hear how, in our Democracy work, and especially the recent WhoFundsThem project, we've found that a human approach is sometimes needed, but that there are still some tasks AI can make easier.
For the future we’re thinking about AI as it might apply to WriteToThem not to burden representatives with more mail, but perhaps communications of a higher quality.
Overall, we’re keeping a wary eye open for how AI will almost certainly be (and already is?) muddying the ability to trust the provenance of information — especially given that mySociety is essentially a ‘resupplier’ of data from public authorities and Parliament.
In a LinkedIn post, our Democracy Lead Alex got at the core of the challenges ahead of us all in the civic tech field, when he said: “Different kinds of technologies make different kinds of futures easier – and what we’re trying to do with pro-democratic tech is to make democratic futures easier. But the opposite is obviously [possible], and AI has arrived at the right time to merge aesthetically and ideologically with authoritarian regimes.
“A core to the spirit of civic tech is persuasion by demonstration – and to me TICTeC is a wonderful distillation of that spirit of both imagining better things, and doing the work to show what’s possible.”
And on that thought, we will roll up our sleeves and work towards the version of the future that is better for everyone.
—
We’re leading the conversation on AI and democratic decision making —
and we need your help.
mySociety was founded more than two decades ago to help democratic governance deliver on the raised expectations of the internet era.
We are in a period in which the relationship between tech and government is more entangled and fraught than ever. We’re stepping up, but we can only do so with your support. Please do consider making a donation.
-
TICTeC is wrapped up for another year. The roller banners are stowed away, the lanyards saved for next year, and now we’re back home from Belgium with memories, insights and enough hope to keep us going ’til next time.
It’s always energising to come together with the global civic tech community and share everything we’ve learned. We had attendees from 34 countries, bringing together their experiences — and judging by the comments we’re seeing, we’re not the only ones to have found it both enjoyable and valuable.
The two days were “fabulous and thought provoking”, allowing for “the exchange of experiences and coordinated actions”, and delegates said they returned “inspired, with new insights on civic tech trends and promising collaboration ideas”.
Perhaps Hendrik Nahr from make.org summed the whole experience up best when he said, "It felt like a family gathering of the civic tech community from Europe and beyond. I'm grateful for the energy, the open exchanges, and the motivation to keep pushing forward tech for democracy."
We are grateful too: TICTeC is not just about mySociety creating an open space for such discourse; it also depends on the people who participate and the insights they so generously articulate.
What we talked about
It’s hard to provide a full summary of such a packed event, but fortunately we’ll soon be able to share videos of the majority of the presentations, along with slides and photographs, so you’ll be able to choose what you’d like to see.
The overall theme of the conference was tech to defend and advance democracy, and within that there were strong strands around tech to tackle the climate emergency; citizen participation and deliberation; transparency and access to information… and across everything we heard of the seismic changes to society, to tech and to democracy — both already seen, and expected soon — by the emergence of AI.
To pull out a few high points from so many thought-provoking moments:
Marietje Schaake, delivering her keynote remotely because of last minute train strike issues, still managed to enthrall the auditorium and ignite our two days of conversation with an incisive overview of how big tech is overtaking democratic governance globally, with oversight lagging dangerously behind. We posted a summary on Bluesky in real time, if you can’t wait for the video.
Fernanda Campagnucci‘s day two keynote (summarised here) sliced up the different approaches government can take to citizen participation, from citizens feeding into decision-making processes, to citizens being invited to co-create both the data and the governance systems, featuring a nice story about an elderly lady who grumbled that everyone was talking about APIs (a way for software systems to communicate with one another) at a town meeting but she didn’t know what it meant. Once someone had explained to her, she turned up at every subsequent meeting to request APIs of every department’s output.
Colin Megill used the opportunity provided by TICTeC to launch Pol.is 2.0 to a highly relevant audience. This is a paid version of the open source decision-making platform — the basis of Twitter’s “Community Notes” functionality — which contains a ‘superset’ of new features. Its enhanced LLM capabilities allow it to break sprawling conversations into any number of subtopics, making them easier to moderate and removing blocks to overall consensus that can be created by small sticking points.
Panels brought people together to talk about aspects of parliamentary monitoring and access to information from around the world – discussions we will be continuing through our communities of practice work.
There was a useful session on the importance of, and methods for, measuring impact — after all, TICTeC's foundational purpose — from OpenUp South Africa, Hungary's Átlátszónet Foundation and SPOON Netherlands.
We wrapped up the conference with an examination of how mySociety is navigating AI in recent and near future work, and an open forum about how TICTeC can evolve and continue to be useful to the global civic tech community.
We presented how we’re thinking about, utilising and navigating both the positives and potential dangers of AI. Such considerations are also preoccupations for others in our field: several organisations are experimenting with AI to achieve or work more efficiently toward their pro-democracy aims; others are foreseeing problems that AI may bring, from amplifying misinformation to algorithm-based decisions that affect individuals’ lives.
There wasn’t an organisation at TICTeC that isn’t thinking about AI in one way or another, as evidenced in diverse sessions across the entire conference. There’s a sense that the conversation has matured from last year, moving on from hype to clear engagement on practical uses, and for scrutiny of both model creators and government uses. We’ll write more about this in a separate post.
And also, watch this space for videos and photos from TICTeC 2025, which we’ll share as soon as they’re ready. That should keep us all going until next year.