1. What we’ve learned about building datasets with FOI

    At the beginning of the year, we set ourselves an ambitious goal: to help a group of small organisations working with marginalised communities to run Freedom of Information–based campaigns using WhatDoTheyKnow Pro’s batch-request and project features. We recruited groups working in areas as varied as domestic abuse, arts funding, youth health, SEND provision, parental leave, fuel poverty, and migrant justice. 

    As the year draws to a close, we’re reflecting on the project and the lessons we’ve learned from it. It’s been a total privilege working closely with these organisations, because it gave us a front-row view of the real challenges of frontline campaigning and community support. 

    What became clear early on was that the hardest part of a batch-request project isn’t actually pressing “send”.  Campaigners know their issues intimately, but FOI requires a specific kind of precision: pinning down exactly what data will answer their question, what format it should be in, and which public bodies actually hold it. Moving from “we want to understand this issue” to “we need these five questions answered from these 150 authorities” is a surprisingly big leap. 

    Luckily, WhatDoTheyKnow’s knowledgeable volunteers were able to help our groups go from vague policy areas to precise questions, and to understand what information was already out there. One of our groups didn’t end up submitting a big batch request, as in the course of their preparatory research they found an already-published dataset from an industry body they didn’t know existed. This is still a win — proactive publication by authorities makes everyone’s life easier.

    In the cases where we had good questions and had identified the right authorities, we then still had to tackle the practical reality: for small teams already stretched thin, a large FOI project which asks a lot of questions requires capacity to deal with the answers. These can come in many forms: follow-ups, clarifications, refusals, delays, internal reviews. Our Projects tool helps to make dealing with the range of responses easier, but the scale of the challenge can still require a serious commitment of time and resources. Zarino shared his experience of this on our blog back in October.

    Just this week we had a moment that illustrates this: one of the groups we were supporting sent a batch FOI request to 133 universities on 5 July. As I write this in December, they are still receiving responses. The most recent one, a refusal, arrived five months after the original request! 

    We’ve got two strands of thought here. On one hand, it’s good to be realistic. Although these moments are frustrating, they also teach us to be prepared for slow, unpredictable timelines, and that persistence is part of the craft. On the other hand, we feel strongly that citizens shouldn’t have to be quite so persistent, and that the pace shouldn’t be so slow or unpredictable. That’s why we’re advocating for upstream policy improvements, such as in our recent evidence to the Scottish Parliament, and at our upcoming FOI Fest conference.

    Although it’s not always been straightforward, this year reinforced why FOI is worth the effort. A particularly strong example came from SCALP and Netpol’s From Scotland to Gaza report, which, with our help, used batch FOI requests to uncover policing practices around protests. Their methodical approach combined data from public bodies with testimonies to make a compelling case that has shaped media coverage and public debate. It’s a reminder that FOI doesn’t just extract information, it empowers communities to speak with confidence.

    All of this left us with a clearer sense of what we can do in future to help make big FOI projects work. A few lessons stood out:

    • Start smaller: a 10-authority pilot builds confidence and tests the strength of the question.
    • Co-design the requests: working together on wording and structure reduces uncertainty; the organisations have expertise in their area, while our volunteers have a second-to-none understanding of how to write a clear request.
    • Prepare organisations for the long tail: follow-ups, delays, and refusals are, unfortunately, to be expected; they are not signs that the project has failed.
    • Volunteers can help with the volume of work: Climate Emergency UK have set the standard for how to train, empower and mobilise the cohorts they need to churn through large quantities of data.
    • See FOI as a strategic, not administrative tool: it’s most useful when tied directly to campaign goals.

    We fundamentally believe that every organisation can benefit from FOI; they just need the right scaffolding and resources. If you know what you’re in for, the whole process becomes far less intimidating.

    What next? We’re refining our approach, watching what happens with our initial batch of projects, and constantly updating our guides and help pages to support our users in their big and small FOI projects. Every request is a small act of collective muscle-building. We’re excited to keep learning and keep improving the support that makes those acts possible.

    Photo by Danist Soh on Unsplash

  2. New research report: Supporting good communication

    With WriteToThem.com we want to run a service that helps people write the right message to the right place. That means helping users express themselves effectively and keeping the service a constructive channel between constituents and representatives by deterring abusive messages.

    Abuse and intimidation aimed at elected representatives does not just harm the person receiving it. It corrodes the openness and trust that democratic culture needs, and it can deter people (especially those from under-represented groups) from taking part in public life at all. 

    We think we’re in a good position to play a constructive role in this area. One problem that has been raised is frustration at bouncing around layers of government, where a key benefit of WriteToThem is getting people to the right layer first. But we need to go further than that to understand how we can discourage abusive messages – both to directly implement approaches, and to trial patterns that could be implemented by a wider range of parliaments and local authorities.

    We’ve been exploring what a “toxicity” risk score would look like in our infrastructure and have released a report of our findings so far. We trialled a range of options — from baseline keyword matching, to Google’s Perspective API, to running lightweight models locally (IBM Granite Guardian), and then to LLM-based grading as a second pass for tricky cases like implicit threats or messages quoting abuse from third parties.
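
    As a rough illustration of that tiered idea (this is not code from the report; the helper functions, keyword list and thresholds below are all hypothetical):

    import re

    ABUSIVE_TERMS = re.compile(r"\b(?:threat|scum)\b")  # baseline keyword list, heavily abridged

    def toxicity_score(message: str) -> float:
        """Score a message 0-1, escalating through progressively costlier checks."""
        # Tier 1: keyword matching catches only the most explicit abuse
        if ABUSIVE_TERMS.search(message.lower()):
            return 1.0
        # Tier 2: a lightweight local classifier (Granite Guardian style);
        # hypothetical helper returning a 0-1 probability
        score = local_guard_model(message)
        if score < 0.4 or score > 0.8:
            return score  # confident enough either way
        # Tier 3: LLM grading as a second pass for the tricky middle band
        # (implicit threats, quoted abuse); hypothetical helper
        return llm_grade(message)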

    But having a risk score is less important than how it is used. We’ve mapped out a few different approaches beyond a manual moderation approach – such as soft “nudge” prompts (encouraging people to reconsider wording before sending), cool-down delays for higher-risk messages (without removing someone’s ability to contact their representative), and informative flags for recipients (for example, passing along a risk score or relevant metadata on a message).
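
    Again purely as a sketch, those interventions could hang off the score from the previous snippet (the thresholds and action names are hypothetical, not decided policy):

    def intervention(message: str) -> str:
        """Map a toxicity score to an intervention (thresholds hypothetical)."""
        score = toxicity_score(message)
        if score < 0.4:
            return "send"               # no intervention
        if score < 0.7:
            return "nudge"              # prompt the sender to reconsider wording
        if score < 0.9:
            return "cool_down"          # delay delivery without blocking sending
        return "flag_for_recipient"     # pass score/metadata along to the recipient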

    Having mapped out some technical possibilities, our next step is to talk to more people about which approaches make sense – something we’ll be doing as part of our wider Welsh Government funded democratic engagement work to improve WriteToThem.

    For more details on the approaches tested, potential issues with different methods of implementation, and unanswered questions, you can read the report online.

    Image: Pawel Czerwinski

  3. New report: WriteToThem Insights

    Understanding more about constituent communication

    We’ve released a new report exploring insights from WriteToThem about the content of constituent communication – you can read the whole report online or a summary below. 

    WriteToThem.com is a long-running mySociety service that enables people across the UK to contact their elected representatives by entering their postcode and sending a message through the site.

    This service provides a unique opportunity to understand the flow of communication between many constituents and many representatives. Our WriteToThem Insights report uses surveys to understand more about what people are writing about. 

     While previous work identified patterns in response rates and deprivation gradients, this experiment focuses on understanding what people are writing about, distinguishing between casework (individual problem-solving) and campaigning (policy-oriented advocacy).

    A new survey and data-processing pipeline were developed to categorise and anonymise message summaries, applying machine learning and large language model techniques to cluster and label topics (a sketch of this kind of pipeline follows the findings below). Analysis of 5,400 messages from Q3 2025 found:

    • Casework and campaigning form two distinct types of communication, with casework more common for councillors and campaigning dominant for MPs.
    • The deprivation gradients of these two types differ sharply: campaigning is concentrated in less deprived areas, while casework is more evenly distributed, though likely still underrepresents the most deprived groups.
    • First-time users are more likely to send casework messages and to receive responses.
    • Top themes in casework include housing, local services, health, and anti-social behaviour; in campaigning, issues such as Gaza, climate policy, and digital ID predominate.
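
    For illustration only, the clustering-and-labelling stage of such a pipeline might look something like the sketch below. The report describes the actual methodology; the embedding model, cluster count, and helper names here are assumptions, not necessarily what was used.

    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    def cluster_summaries(summaries: list[str], k: int = 25) -> dict[int, list[str]]:
        """Group anonymised message summaries into k candidate topics."""
        model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
        embeddings = model.encode(summaries)
        labels = KMeans(n_clusters=k, random_state=0).fit_predict(embeddings)
        clusters: dict[int, list[str]] = {}
        for summary, label in zip(summaries, labels):
            clusters.setdefault(int(label), []).append(summary)
        # Each cluster would then be passed to an LLM to propose a
        # human-readable topic label (e.g. "housing", "climate policy")
        return clusters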

    This data has limits. It covers only a portion of total correspondence, and we have little information about whether the sample is representative enough to generalise to messages sent more widely. That said, we think there are strong uses both for improving WriteToThem itself and for informing broader understanding of constituent communication.

    We want to build on this work: refining the analysis process and exploring opportunities to collaborate. We see particular value in digging more into casework data as something that could inform more systematic approaches in this area, helping representatives across the country join up information and improve collective scrutiny of government services.

    The full report can be read here.

    Image: Christopher Burns

  4. Use the Council Climate Action Scorecards to bring change

    Campaigning organisation and lobbying group the RBWM Climate Emergency Coalition (CEC), located in the Royal Borough of Windsor and Maidenhead, have been putting the Council Climate Action Scorecards to really good use.

    Overall, 14 separate groups make up the CEC, who convene with a shared interest in mitigating climate change and/or protecting and restoring nature. The coalition holds the borough council to account in its stated goal to achieve net zero by 2050 and keep within its carbon budget, and we were pleased to hear all about it from someone involved from the very beginning, Paul Hinton.

    Getting things started

    First, he told us how the group had come into being:

    “In March 2019, a local resident organised a series of climate protests at the town hall in Maidenhead, in response to Greta Thunberg’s solo school strikes outside the Swedish parliament and the establishment of the Fridays for Future movement.

    “I attended the first protest with a copy of the Green Party toolkit ‘in my back pocket’, and suggested to some of the others present that we organise a campaign asking the council to declare a climate emergency. That’s how the CEC came into being.

    “The campaign was a success. The climate emergency was declared in June 2019, although with a target of net zero by 2050 rather than the 2030 that we had campaigned for, and the borough’s first environment and climate strategy was adopted in December 2020, to run for five years to the end of 2025.

    “The council is now developing the second version of its environment and climate strategy, to run until the end of 2035. The CEC is working hard to ensure that the new version is as ambitious as possible.”

    Around the same time, Climate Emergency UK was just starting up, initially with the aim of collecting together every UK council’s climate declarations (out of which came CAPE, a joint project from CE UK and mySociety) — so there was an obvious shared interest right away, as Paul explains:

    “Members of the CEC attended the first Climate Emergency Conference in Lancaster in March 2019 when it was a grassroots initiative led by Councillor Kevin Frea. We kept informed about CE UK’s activities, and were very pleased to see the genesis of the 2021 Scorecards, even though we didn’t make as much use of them as we might have.”

    A long term relationship

    There is value in understanding that campaigning for climate action may mean a long-term relationship with your local council. That’s not only because your message might take time to be heard; the campaigners themselves may be learning skills and knowledge. Paul explained that CEC have seen both successes and challenges, due to a number of different factors:

    “The Scorecards have become extremely useful as we have gained experience and a better understanding of how to use them. In 2023, we produced an analysis of the RBWM’s climate performance for the newly elected Liberal Democrat council, based on the second iteration of the Scorecards; but there seemed to be no appetite to revisit the 2020-2025 strategy and the resulting action plan, and sadly this had very little impact.

    “In 2025 we produced another analysis and report, this time based on the 2021, 2023 and 2025 Scorecards. This report has been extremely impactful for two main reasons. First, the analysis was based on three separate Scorecards results, and a clear trend was emerging so our arguments were stronger. Secondly, we shared the analysis more widely so that the message was more difficult to ignore.

    “The report was shared with key cabinet members including the leader and deputy leader of the council. It was also shared with the steering group of the Climate Partnership (CP), a joint council/community organisation set up by the council to further the council’s net zero and nature recovery ambitions in the community. The CP were going to be involved in developing the new environment and climate strategy 2026-2035, and following receipt of the report became fully aware of how the borough’s climate action performance measured up against similar local authorities and what level of ambition would be required to reach their net zero target.”

    The CEC have played a long game, through changes of leadership and council majorities, seeing changes along the way:

    “There have been frustrations over the years with the apparent lack of urgency and recognition of the scale of interventions needed across all council departments, but we have been pleased to see the council’s sustainability team grow, and whilst we recognise the challenges they face in terms of budgets, limited national government support, and perceived lack of a strong public mandate for climate action, we strongly believe that the new strategy should allow for a more ambitious approach, championing action, and providing the borough with clear goals commensurate with addressing the climate crisis.

    “The CEC has a greater voice now than perhaps at any time since 2019, and it has been invited to a number of discussions and meetings with the current administration who appear to be much more receptive to the CEC’s input; this includes an upcoming dedicated workshop with council officers to input into developing the strategy and action plans.”

    Press coverage

    Paul mentioned that part of the CEC’s outreach activities involved sharing the report with the Maidenhead Advertiser, resulting in the publication of an in-depth article. We were interested to hear more about this, and how useful the group had found it to get coverage in the local press.

    Paul explained, “The council is developing the second version of its environment and climate strategy, to run until the end of 2035, and we knew that we had to garner wider public awareness and support if they were going to recognise the need to create an ambitious, measurable and impactful strategy which would result in significantly increasing the pace and scale of the actions taken.

    “The Maidenhead Advertiser was one of our chosen routes as we made a conscious effort to share our report widely and strategically. We shared a copy with the Editor and chief reporter, and they then wrote the story with one round of consultation with us.”

    We wondered whether the CEC would advise other groups across the UK to try for coverage in the local press as a good campaign strategy. Paul thinks so:

    “We’ve had no shortage of letters and articles published in the Advertiser, but for some reason a news story seems to have much more impact. The press provides us with the opportunity to inform the public when the council is not meeting the targets it has set itself; even while we continue to work constructively together with councillors and officers.

    “Coverage in the local press should always form part of a good campaign strategy, but is even more impactful if used as one of a number of options and routes for getting the message out. Some of the data in the Scorecards is quite technical, and so difficult for those less familiar with it to fully appreciate, so in future we’d also look at issuing a press release in addition to the report itself, covering the main points.”

    Thanks to all the CEC’s activity, their report has been shared far and wide — but it had a secondary effect: the council also saw how useful the Scorecards could be in their own work. Paul says that the council have adopted them at community workshops to highlight priority areas for action.

    We are glad to hear it — and grateful to the RBWM CEC’s great efforts in putting the Scorecards to good use. Thank you to Paul for sharing his experiences.

    Image: Tom Bastin (CC BY 2.0) via Wikimedia Commons

  5. We prototyped a data hub for the VAWG sector, and it’s already raising important questions

    Around the world today, organisations and communities are recognising the 26th International Day for the Elimination of Violence Against Women. This is a moment to reflect on one of the most prevalent and pervasive human rights violations in the world – but it’s also a call to action.

    As one small action in that continued effort, we’ve been working with the End Violence Against Women Coalition (EVAW) this year, to explore how something like our Local Intelligence Hub could help their members organise for change.

    Little did we expect that, in building a prototype Data Hub for them to explore their needs, we’d discover a gaping hole in data collection about the safety of girls at schools in 16% of local authorities.

    But first, what were we aiming to achieve?

    In a post last month, I shared some of the goals of this work – such as using data to galvanise support from MPs, to monitor patterns that official bodies might miss, and to help EVAW’s members make the case for increased local funding to address violence against women and girls (VAWG).

    I also shared how we were using the batch request tools in WhatDoTheyKnow Pro (our advanced Freedom Of Information service) to generate new public data on VAWG prevalence in schools.

    And, of course, all of this work builds on top of the Local Intelligence Hub we designed and built with The Climate Coalition and Green Alliance – which has already proved its worth as a tool for community organising and public affairs, including through events like this Summer’s #ActNowChangeForever Mass Lobby, and The Climate Coalition’s Great Big Green Week.

    Now it’s time for an update – how did we get on?

    A replicable pipeline of brand new VAWG data

    When we built the Local Intelligence Hub with The Climate Coalition (TCC), much of the data we included was already publicly available: MP information from Parliament, demographic data from the ONS, public opinion data shared by polling companies. Combined with TCC member organisations’ own data on their local support and activities, the Hub was able to present a nuanced picture of how climate and nature are being protected across the whole country.

    We knew we faced a different challenge with the VAWG data hub. As I explained last month, public data in this space is often incomplete, or missing entirely. We wanted to use this as an opportunity to test how WhatDoTheyKnow and the Local Intelligence Hub could work together to generate and then publish brand new datasets on VAWG prevalence or activity, made public through FOI requests to local authorities and policing bodies.

    We chose school safeguarding referral figures as a suitably challenging example that was also indicative of levels of risk to children. When school staff fear a child may be in danger in any way, they are meant to refer it to the safeguarding team at their local authority. The UK government collects some information about these referrals as part of its Children In Need census, but the definition of a “child in need” is somewhat open to interpretation, and we and EVAW both suspected that, as a result, the official data was only telling part of the story. The census also only covers local authorities in England, leaving Scotland and Wales to collect their own, incompatible data (the CRCS census in Wales, and Children’s Services Plans in Scotland).

    With the help of the WhatDoTheyKnow volunteers, we drafted an FOI request to be sent to every UK local authority with a responsibility for education, asking for three things:

    1. The total number of safeguarding referrals made to them, by schools in their area – this is data that technically should be collected by the CIN census for English authorities, but we suspect is not
    2. Any sort of categorical breakdown they held about those referrals, such as a breakdown of the genders of the children involved – this doesn’t currently appear in any public dataset that we know of
    3. The total number of schoolchildren in their area

    You can browse the requests and responses on WhatDoTheyKnow. Here are some key things we learned through the process:

    No matter how much you research your request, something will slip through

    Our background research and even our first pilot requests failed to reveal that the total number of schoolchildren is something that’s already published for England, Scotland, and Wales. Thankfully, many authorities simply pointed us to this data (with a “Section 21” refusal – “information already accessible”), but others went ahead and provided the data for each year we requested. Had we known in advance that the data was already available to us, we could have left it out of our requests to English, Scottish and Welsh authorities. We can only hope that, since this is such basic information, the authorities who did go on to provide the data didn’t spend too long gathering it.

    You will receive information in every format imaginable, and your data extraction process needs to handle that

    We asked for responses to be provided in a “re-usable, machine-readable format” if the authority deemed the information to meet the FOI Act definition of a ‘dataset’. We think, in reality, very few of the authorities held this data in a format structured enough to count as a ‘dataset’, but a few did send over their data in spreadsheet format, which was nice to see! Others, however, sent us tables in Word documents, in PDFs, SharePoint links, even ASCII-art tables in raw email text.

    We also knew authorities might hold the information by calendar, academic, or financial reporting year, so we gave them the freedom to provide it to us in whichever scheme they had. Unsurprisingly, we received responses across all three (57% calendar year, 35% financial year, 8% academic year).

    Happily, the crowdsourcing interface in WhatDoTheyKnow Projects enabled us to make relatively quick work of extracting the data we needed. Even so, we were ultimately only able to extract a fraction of the information some authorities provided, and some interpretation of the responses (e.g. “is this a financial year, or an academic year?”) relied heavily on human intuition. That means we’ll need to think carefully about how we structure future requests if we want to process the data through any sort of automated pipeline.
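
    To give a flavour of why automation is hard here, a first-pass classifier for the reporting-year labels we received might look like this sketch; the function is hypothetical and, as the final branches show, plenty of real labels would still end up with a human.

    import re

    def classify_period(raw: str) -> str:
        """Best-effort guess at the reporting-year scheme of a raw label (hypothetical helper)."""
        raw = raw.strip().lower()
        if re.fullmatch(r"(19|20)\d{2}", raw):
            return "calendar"
        if "april" in raw or "apr" in raw:
            return "financial"    # UK financial years run April to March
        if "september" in raw or "academic" in raw or "term" in raw:
            return "academic"
        if re.fullmatch(r"(19|20)\d{2}\s*[/-]\s*\d{2,4}", raw):
            return "ambiguous"    # a split year: financial or academic?
        return "unknown"          # needs human interpretation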

    Complex requests are a risk

    The more information you request, the more useful it might be to you, but the more you risk the public authority refusing to answer it on “Section 12” cost grounds. WhatDoTheyKnow’s advice is to keep your request as short and focused as possible. But we knew that historical data, across a few metrics and a few years, would be most useful to the VAWG Data Hub’s users, so we asked for as much as we felt we could justify – and it mostly paid off.

    70% of responses to our batch request contained both key pieces of data we wanted (the total referrals for multiple years, and the gender breakdown). Another 7% contained just the yearly totals, without any gender breakdown.

    7% are still awaiting a response, even now, over a month after the statutory deadline. And 6% of authorities said they didn’t hold the information at all (because they, surprisingly, don’t record the referrals they receive). Which leaves 10% who refused our request on cost grounds. If our request had been simpler, this number of refusals would likely have been smaller.

    However, this result is in itself interesting: at least 16% of local authorities responsible for handling safeguarding referrals either don’t record them, or record them in such a way that it would take more than 18 hours of officer time to report how many they received in a given year, or how many relate to girls.

    If the government is serious about halving violence against women and girls within a decade, this is precisely the sort of data local authorities will need at their fingertips, in order to monitor progress and allocate resources. The fact that it’s effectively inaccessible to 16% of them right now is a worry.

    Combining data for new patterns and new questions

    Remember how I mentioned we were adapting the Local Intelligence Hub for EVAW’s needs?

    With our FOI data extracted through WhatDoTheyKnow, we were able to very quickly load it into a prototype VAWG Data Hub. Alongside it, we loaded in a whole new area type to filter by—“Policing areas” or Police & Crime Commissioners—as well as some examples of crime prevalence data (the number of reported VAWG-related incidents, by policing area) and public policy guidelines data (the Council Of Europe’s recommended minimums for VAWG service provision).

    Thanks to my colleague Alex’s improvements to our TheyWorkForYou Votes infrastructure, we were also able to make quick work of importing VAWG-related MP data into this new hub – including VAWG-related parliamentary groups that MPs might be sitting on, or relevant votes and motions they’d supported.

    Plus, of course, there was all the usual MP and area data that campaigners and public affairs teams have already found so useful on the Local Intelligence Hub – things like election results, public attitudes polling, and income and poverty indicators.

    Data in action

    With the data in place, it was possible for us to give EVAW’s member organisations a demonstration of how they could use a data hub like this as part of their campaigning, fundraising, and policy influencing work. For example, to find the council areas with the most school safeguarding referrals for girls and also the highest overall deprivation:

    Or to find the MPs with the strongest support for VAWG prevention, but in constituencies with high VAWG prevalence:

    All of the data we demonstrated this Autumn is still a work in progress, but it was reassuring to see almost 70% of members on a recent demo call saying that a VAWG data hub like this would “definitely” be useful to them in their day-to-day work.

    We look forward to honing the VAWG Data Hub further with EVAW and their members, to make sure we’re asking the right questions, and presenting an accurate picture of the VAWG landscape.

    Header image: Khyati Trehan, for Google Deepmind – Pexels Free License.

  6. What can we learn from a clock that’s stopped?

    Do you live somewhere that boasts a magnificent municipal clock — a timepiece that anyone passing by can look up to, and check that they’re nicely on time for their next appointment? A vast clockface on the side of the town hall, perhaps; or a golden clocktower standing tall above the shopping streets…a landmark under which to meet friends?

    OK, good. The next question is: does that clock actually work? 

    If its hands have come to a solid halt; or it’s running at a dogged twenty minutes behind time; or its rusted chimes, once mellifluous, now sound more like the rasping call of an imperilled frog, the chances are that it’s been logged on the Stopped Clocks website.

    Here you can see which clocks near you have fallen into disrepair; or check the time, which is delivered to you along with an apposite poem.  

    The site is the work of Alfie Dennen, who describes himself as “somewhere between a technologist and an activist — with a tendency towards action over academics”.

    It’s not entirely out of character, as he explains: “I’ve been building things for the web that blend activism and tech since the late nineties: for example We’re Not Afraid was a project that spoke to London’s — certainly my — defiance after the bomb attacks of 7/7; and Bus Tops was a project for the 2012 Cultural Olympiad which aimed to democratise both access to and the creation of public art.”

    Nor has the project sprung up overnight. It all began with Alfie’s realisation that he could become the clock-winder that kept his own local clocktower, in North London’s Caledonian Park, ticking.

    “Restarting the Cally clock in 2012 cemented for me that both documenting, and hopefully one day restoring, public clocks was a worthwhile thing to spend some significant portion of my life doing.”

    Now, this is a very mySociety-type project, blending coding, community, and a sense of shared responsibility. We probably would have written about it anyway. But also, in gathering the data he needed, Alfie made substantial use not just of Freedom of Information through our WhatDoTheyKnow Pro service, but also our MapIt points-to-boundaries software — so we have all the more reason to ask him all about it.

    To begin with, how did he realise that FOI might be a good tool to help with the site?

    As you’ll see if you click around, the project crowdsources information — so if you know of a local clock that hasn’t been included, you can add it yourself. To identify which council area each clock sits within (see the map page here), Alfie had been using MapIt to generate boundary information. That gave him a vague awareness of mySociety and our other services, including WhatDoTheyKnow.
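
    For readers unfamiliar with MapIt: given a longitude and latitude, it returns the administrative areas containing that point. A minimal sketch of that kind of lookup (the coordinates and the type filter here are illustrative, and the public API is rate-limited):

    import requests

    def council_areas_for_point(lon: float, lat: float) -> list[str]:
        """Ask MapIt which administrative areas contain a WGS84 point."""
        url = f"https://mapit.mysociety.org/point/4326/{lon},{lat}"
        areas = requests.get(url, timeout=30).json()
        # Keep only council-like area types (district, borough, unitary...)
        wanted = {"DIS", "MTD", "LBO", "UTA", "CTY", "LGD", "COI"}
        return [a["name"] for a in areas.values() if a.get("type") in wanted]

    # e.g. council_areas_for_point(-0.1207, 51.5441) for Caledonian Park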

    “I’d never used FOI before, but I realised that it’d be a great way to get baseline data over and above the data I can gather about stopped clocks directly — given that walking every street in the UK is a bit out of my current comfort zone!

    “When I went to look at WhatDoTheyKnow properly, I realised that I could send FOI requests in a batch, and that got me super enthused. Suddenly the looming month-long period of finding spare time to do them one by one disappeared! WhatDoTheyKnow Pro’s batch process, and clear interface to manage status as they were responded to, has been such a useful tool for me.”

    Great — so, having sent off lots of requests, has Alfie seen any responses yet?

    “The deadline for responses was the 21 November for most of the requests, but there’s still hope that some more will come in. 

    “So far, of the 308 councils I contacted, 107 have given substantive responses. 70 councils responded to say they held “no information” and 113 are delayed or still pending. Between them, they identified 231 council-managed clocks, of which 175 are working, 34 have stopped, and 22 have an unknown status. 

    “So about 15% of council-managed clocks are stopped, which honestly is better than I expected. But here’s the thing that really stood out: when I cross-referenced this with my database of 243 stopped clocks, it turns out that only about 40 are actually council-managed. The vast majority (so far) — 84% — are outside of the scope of Freedom of Information as they are in private hands, or owned by churches, or other bodies that used to be public but aren’t any more. In this sense, privatisation has created a clear accountability gap.

    “The responses have varied greatly, which is interesting in itself. Some councils sent back detailed spreadsheets detailing every clock they manage, with maintenance schedules, budgets — the lot. Some look after lots of clocks, while others have none at all — apparently. Only 6.4% of councils could tell us what they spend on clock maintenance. Of these, the average maintenance cost was £2,929/year.”

    So, curiosity aside, how will all this data be put to use?

    “A big part of why I’ve been doing this foundational research through FOI requests is to provide a backbone to a book I’m writing, which looks at the last 45 years of austerity in the UK through the somehow very human lens of stopped public clocks. 

    “Yes, stopped clocks are a small thing in the round when we look at all the issues facing our communities, cities and civic spaces today.

    “But they’re also the perfect way to talk to people about how they feel about their town, their neighbourhood, their city. Ultimately this is my main aim: reaching people where they are to talk about re-engaging with our civic space, coming together and understanding each other and our built spaces in ways we once did but have lost sight of.

    “The civic infrastructure which once supported the maintenance of public clocks has been systematically stripped away through a combination of privatisation and austerity dating back to 1979. 

    “But I wanted the data to be useful beyond just the book, so I’ve built it into the Stopped Clocks website as an interactive policy tab. You can see the map of all FOI-tagged clocks, filter by ownership type, and read through the timeline of disinvestment.

    “More practically, it means when someone finds a stopped clock in their town and wonders, “who’s supposed to be looking after this?”, maybe they will find the answer on the site. And if they want to campaign to get it fixed, they’ve got evidence: council responses, maintenance data, the broader context of how we got here.”

    Each clock boasts a number of tags, so for example you can see which data came through FOI requests, and which council area they’re in — that part is thanks to MapIt.

    “I’m also tagging clocks with their ownership type, which is a somewhat manual process; and whether they’re on listed buildings, using the Historic England API. 

    “Tagging lets us start seeing clearer patterns, like how Lottery-funded church restorations from the 90s are failing on a very predictable timeline, or how privatised civic buildings — former town halls, libraries — now in commercial hands are disproportionately neglected.”

    And so finally, what plans does Alfie have for the project? Presumably he has a strong incentive to avoid the irony of its becoming an untended asset itself.

    “I can see a path towards a Stopped Clocks charitable foundation that does two things at once: gets clocks running again, and uses that process to rebuild civic engagement at the local level. 

    “Because here’s what happens when you try to fix a stopped clock: you immediately find out who owns it, who’s responsible for it, why it stopped, why nobody’s fixed it. And that leads you straight into conversations about council budgets, privatised buildings, who decides what gets maintained and what doesn’t.

    “It’s a way into talking about austerity and privatisation that doesn’t feel abstract or preachy. It’s just there, on the town hall, stopped. 

    “People care about these things; they notice them every day, they remember when they worked. That gives you something to organise around that’s tangible and achievable. 

    “Fix one clock, learn how the system works — or doesn’t; build the relationships and knowledge to tackle the next one.”

    Many thanks to Alfie for talking to us about this project: we hope it inspires others to think differently about the assets that make up our public domain — perhaps even to ask if you can be the person who winds your own local clock.

    Image: Kelsey Todd

  7. Mayoral scrutiny: building an ecosystem of accountability

    Mayors and combined authorities are the future of devolution in England, but the ways in which citizens can understand, scrutinise, or influence them remain unclear.

    Our latest report, Mayoral scrutiny: supporting an ecosystem of accountability organisations, argues that devolution will not deliver on its promises unless we also invest in new forms of civic and democratic oversight. It is not enough to create powerful new Mayors; we need to create the ecosystem that holds them (and the wider web of regional institutions) to account.

    Why scrutiny matters

    Combined authorities are designed to bring councils together to plan and deliver across a region. But unlike the London model, they do not have an elected assembly meant to hold the mayoral executive to account.

    Existing models, such as council scrutiny committees or parliamentary hearings, can only go so far. Combined authorities need scrutiny that reflects the full complexity of their networks and partnerships.

    A scrutiny and civic development fund

    We highlight two complementary approaches already being explored:

    • Local Public Accounts Committees (LPACs): technocratic bodies that examine how public services work together across a region, looking not only at the Mayor’s decisions but at value for money and collaboration across agencies.
    • Democratic journalism funds: public-interest media funds guided by citizens’ assemblies, ensuring independent, locally relevant journalism that supports democratic life.

    We propose bringing these ideas together in a new Scrutiny and civic development fund: a local grantmaking body with priorities set by a citizens’ assembly. The fund would support a mix of civic institutions — from expert-led scrutiny committees to independent journalism — that together strengthen public accountability and regional identity. Approaches along these lines would help ensure that devolution does not just move power geographically, but makes it genuinely more responsive to the people it serves.

    Supporting existing scrutiny

    This report also explores ways we could apply our existing tools and approaches to sustain and connect the accountability ecosystem that already exists. Through tools like MapIt, TheyWorkForYou, and WhatDoTheyKnow, we can build a civic democratic stack to support journalists and civic technologists to understand and monitor combined authorities.

    We’ll also continue to explore how civic tech can make these new layers of governance more transparent, and how data and digital infrastructure can support the work of local scrutiny.

    Read the full report

    The report explores the history of scrutiny in English devolution, how these proposals could work in practice, and sets out the steps to strengthen the civic fabric around mayors and combined authorities. You can read it here. 

    Header image: Photo by Omar Flores on Unsplash

  8. Running open LLM models

    Most discussion and usage of LLMs is focused on high-profile closed models such as OpenAI’s ChatGPT family and Google’s Gemini – which are widely available and integrated into a range of existing products and services.

    Because these are closed models, access to and hosting of the models is controlled by the companies that create them. This presents a dilemma for civic tech organisations who believe in open source: important parts of their processes can disappear into black boxes beyond their control. These may work well and be affordable today, but they create new risks. Specific models might become unavailable, pricing might change, and relying on them means lock-in to specific providers.

    Open LLM models provide an alternative approach. In a familiar issue from open source licensing, there are different ways in which a model can be ‘open’. Open weights models have the final structure of the model released, and can be run on your own hardware (Meta’s Llama models are an example). Fully open models have the underlying (openly licensed) training data released as well, along with the recipes and evaluation systems used in their training; AI2’s OLMo family of models and the Swiss AI institute’s recent Apertus model are examples of these. Somewhere in between are approaches like IBM’s Granite models, where the model is released as open weights and the data was licensed for training (addressing copyright issues) but is not publicly accessible.

    What are weights? Basically a model can be understood as a big network of connections – where the ‘weights’ are how strong (and influential) a connection is. What’s happening in the training process is a refinement of these weights as a result of being exposed to the training data. The weights at the end of the process are the trained model, and can be shared and used by others. But if you also have the training data and process, you can recreate the model step-by-step, with a clear audit trail of what’s in it.

    Any kind of open weights model is practically appealing because it unlocks new ways to work with private data without sharing it with third parties, and creates more flexibility around infrastructure. For instance, we currently use a fine-tuned version of Llama to help flag immigration correspondence in WhatDoTheyKnow.

    Fully open models are ethically appealing because they avoid the issues of models that have been trained on copyrighted data. Their existence is a challenge to an AI policy debate in which countries must trade off the rights of creators against the benefits of AI as sold by a handful of companies. They fit well with our open source ethos – and understanding more about how to use them practically gives us options to improve our own services, and to contribute to wider arguments about responsible use of AI.

    This blog post is a write-up of several practical experiments in using the 7B-parameter variant of OLMo-2, both locally on a laptop GPU and remotely using Hugging Face’s Inference Endpoints.

    Using OLMo-2 locally

    Our purpose in running something locally is to be able to process sensitive information that should not leave our infrastructure. In this case, using OLMo-2 to create human-readable representations of clusters from WriteToThem survey responses. While users are asked not to include personal information in this survey, enough do that we need to treat the basic dataset as having personal information that should not be shared.

    We used llama-cpp (and the associated Python bindings) to run the local model. An alternative local approach is to use ollama to run a local server. The reason for using llama-cpp in this case is that ollama doesn’t always seem to pick up that less well known models can use ‘tools’ correctly (which is required for structured data output). Another benefit of running in process rather than as a separate server is that the script can turn the resource-intensive part on and off (although there’s a corresponding start-up time) rather than needing a separate server process to be running.

    Setting up the libraries

    Installing llama-cpp in a way that can use the GPU is not straightforward. This set of instructions for Windows 11/Nvidia GPU mostly worked for me. I additionally needed to add an extra DLL directory before importing from llama_cpp because there’s a DLL folder that the library wasn’t yet referencing. 
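
    For anyone hitting the same problem, the fix amounts to registering the extra DLL folder before the import; the path below is illustrative, not the actual location on my machine.

    import os

    # Windows only: tell the loader about the CUDA DLL folder that the
    # llama-cpp bindings weren't yet referencing (path is illustrative)
    os.add_dll_directory(r"C:\path\to\llama_cpp\lib")

    from llama_cpp import Llama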

    Big picture, WheelNext is a project to try and make installing correct versions of the library easier across different OS/GPU combinations. In the meantime, setting up a local machine is a bit fiddly.

    Downloading model information

    Llama-cpp uses GGUF files – which have all the weights in a single file. There are libraries to convert from the transformers format – but converted files are often made available by model publishers on Hugging Face.

    Downloading the model can be done using the huggingface_hub command line tool (here using uv).

    uvx --from huggingface-hub hf download allenai/OLMo-2-1124-7B-Instruct-GGUF olmo-2-1124-7B-instruct-Q4_0.gguf --local-dir models

    This pulls down a quantised version, which has the same number of parameters, but with the values of the weights significantly rounded down. This tends to cost much less in quality than it saves in file/memory size (why? Broadly, high fidelity is useful during training, where adjustments happen in small shifts, but once you have something working the general structure is good enough) – and it fits the model just inside the ability of my laptop’s GPU.

    This download can also just be done in code:

    from functools import lru_cache

    from llama_cpp import Llama

    @lru_cache
    def get_llm():
        return Llama.from_pretrained(
            repo_id="allenai/OLMo-2-1124-7B-Instruct-GGUF",
            filename="olmo-2-1124-7B-instruct-Q4_0.gguf",
        )

     

    Structured data output

    To get structured data out of the model, Pydantic AI can be used with Outlines to query the llama-cpp model (a sketch follows the list below).

    This:

    • makes it easier to define Pydantic data structures that should be returned.
    • makes it easier to swap between local/remote models by swapping the model passed to the agent, but otherwise using a common API.
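
    As a minimal sketch of what that looks like in practice (assuming a recent PydanticAI release, where the parameter is output_type and results are read from result.output; older versions used result_type and result.data), with a hosted model string standing in for the wrapped local model:

    from pydantic import BaseModel
    from pydantic_ai import Agent

    class ClusterLabel(BaseModel):   # hypothetical output structure
        label: str
        description: str

    # The model argument is the swappable part: a hosted model string here,
    # or a wrapper around the local llama-cpp instance via Outlines
    agent = Agent("openai:gpt-4o-mini", output_type=ClusterLabel)

    result = agent.run_sync("Suggest a short label for this cluster of survey responses: ...")
    print(result.output.label)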

    Hosted OLMo-2 model

    An advantage of any open weights model is being able to run it on a range of infrastructure (and being able to change the infrastructure later). 

    In this case, I had a use case where we wanted to do transformations on already public data (the appropriateness of linking to a specific Wikipedia page from a specific sentence in a parliamentary debate)  – and so there was no privacy/security issue for the purposes of the experiment. We are doing further exploration about how we can make this kind of use compliant with our wider legal and privacy commitments. 

    Because OLMo-2 is not a commonly used model, there isn’t an inference service that offers it directly as an option (which would be most efficient – as you’re being charged for tokens while the underlying infrastructure is shared between many users). Instead, you need to create a private server that can manage the model. 

    Creating an endpoint

    Hugging Face Inference Endpoints is the approach I used here – it lets you provision an endpoint connected to a specific model. I’m using the same model as I used locally.

    Depending on the properties of the model, a minimum GPU spec will be suggested. This model came out at about $0.80 an hour; running the 13B-parameter version of the model was about $2 an hour. There are options to run on AWS, Azure and Google Cloud in different regions (although processing data in the EU/UK is a requirement for us, which limits some of the GPU options).

    The scale-to-zero time is adjustable down to about 15 minutes, and it takes a few minutes to load up again from zero. In principle, if the access token is scoped correctly, the huggingface_hub library can handle pausing and unpausing the endpoint (or even programmatically creating one), if more control is wanted.
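
    That pause/unpause flow might look something like this sketch, using huggingface_hub’s inference endpoint helpers (the endpoint name and token are placeholders):

    from huggingface_hub import get_inference_endpoint

    # Token must be scoped to manage the endpoint; names are placeholders
    endpoint = get_inference_endpoint("olmo-2-7b-instruct", token="hf_...")
    endpoint.resume()    # wake the endpoint up
    endpoint.wait()      # block until it reports it is running
    print(endpoint.url)  # base URL to point the inference client at

    # ... run the batch job against endpoint.url ...

    endpoint.pause()     # stop paying for the GPU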

    Structured data output

    This endpoint works well using some of the example HuggingFace connections for PydanticAI. Something I had to adjust was adding an adapter to reduce complex JSON schemas (e.g. anything with multiple model types, enums, etc) from using ‘$defs’ references to a plain inlined structure, because the Hugging Face text-generation-inference interface can’t handle them.
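
    The adapter boils down to walking the schema and replacing local “$ref” pointers with the definitions they point to. A rough sketch of that idea (simple cases only; it doesn’t guard against self-referencing schemas):

    def inline_defs(schema: dict) -> dict:
        """Rewrite a JSON schema so local "$ref"s into "$defs" are inlined."""
        defs = schema.get("$defs", {})

        def resolve(node):
            if isinstance(node, dict):
                ref = node.get("$ref", "")
                if ref.startswith("#/$defs/"):
                    return resolve(defs[ref.split("/")[-1]])
                return {key: resolve(value) for key, value in node.items() if key != "$defs"}
            if isinstance(node, list):
                return [resolve(item) for item in node]
            return node

        return resolve(schema)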

    I have an example of creating a model that Pydantic AI will accept here – the missing config bits are a token associated with the account and the url of the endpoint created. 

    So in principle this means we can have an endpoint that gives us access to a GPU based model for an hour a day at a reasonable price – while we could at a later point swap out to use a local model without adjusting the general logic of the application. This is well suited to our current anticipated uses in batched backend processes, but would be less efficient if it needed to be responsive around the clock.

    Reflecting on the results

    Compared to previous projects using the OpenAI API, the key thing to note is that this was slower and more fiddly on the infrastructure at hand. I was only using the 7B-parameter model, while the 32B-parameter model is the one that evaluates closest to GPT-4o mini. As such, prompts needed to be a bit more detailed about what was required. Similarly, a combination of the hardware and not being able to run queries in parallel over a wider infrastructure means the process takes longer.

    But this is also like comparing a cake to a well-balanced meal – the benefits of an open model are not just philosophical but practical. With a bit more work on the prompt you can get useful results on a laptop with no dependency on third-party services. That brings into scope a range of use cases that OpenAI is not suitable for.

    Even where, such as in the Wikipedia example, there are no privacy issues in using OpenAI, making it easy to swap in an open model makes it much easier to evaluate the effect of using an open model. It will now be relatively straightforward to quickly substitute OLMo-2 into PydanticAI flows using other models and get a baseline feeling for effectiveness. Even where you might choose to use a closed model in a specific instance, it is very useful to work in such a way that you are not locked in to that model and could switch away in future.

    Similarly, having a working process for a non-mainstream model like OLMo-2 makes it easier to explore other models like Apertus. As this has been trained on a wider range of non-English languages, it could provide a more dependable component in LLM integration with the core Alaveteli software – which powers Freedom of Information platforms across a range of languages.

    Understanding open models as a practical approach helps contribute more widely to policy conversations around AI – and where trade-offs and impacts are inherent to the nature of the technology, or are a consequence of how they are currently controlled and produced. 

    Open models are always likely to lag slightly behind the frontier models, but they are already incredibly useful technologies compared to what was possible a few years ago. We want to understand more about how we can practically make use of these models – and help make sure the future of LLMs is shaped by ethical considerations about their training and use, rather than accepting them on the terms of the dominant tech giants.

    Header image: Photo by Zhang Zi Han on Unsplash

  9. Using LLM tools to build APPG scrapers

    Recently we wrote about why we’re now listing APPGs in TheyWorkForYou. This blog post goes into more detail about the technical process we use to gather who is a member of an APPG.

    We have two methods of getting the memberships of APPGs. The first is finding whether it’s already published on their website. The second is using Parliament’s rules to ask the APPG contact for the list. So we need to: a) find all the APPG websites; b) see if they publish members lists; c) if not, ask for the list; and d) get those lists into a consistent format.

    Data that is fragmented and not in the format we want is a fairly common civic tech problem. The solution is to write a ‘scraper’ that reads the content of a website and has a process for converting it to a more structured format. 

    This works well when dealing with only a few sources (e.g. the memberships of the UK’s parliaments only needs a few different scrapers), or where a common format is being used (e.g. many local government websites use similar providers). In the case of APPGs, there is no common template being used. We just have a set of a few hundred websites that may (or may not) contain a list of names. 

    Rather than a traditional scraper, we have built an agentic AI/LLM approach that can extract memberships from websites more flexibly. The end result is a tool with a careful sequencing of manual and automated steps, injecting human review in structured ways. Rather than relying on an “AI makes mistakes” disclaimer, we built a structured process that checks elements efficiently one group at a time, and can lock out errors before proceeding to the next stage. This was also an experiment in using LLMs to write scraper tools, as well as some of the tools needed for the manual review steps.

    Practically, this was an effective way of getting the information we needed that turned a very hard problem into one that we can dependably run regularly. It also suggests more generally useful ways of approaching fragmented data problems (more on this at the end of the post). 

    Building agentic approaches

    An ‘agent’ is often poorly defined, but broadly it’s a language model interface that is given tools (specific functions), a task, and an output data structure, and it loops between these until it produces a result.

    To build agentic functions, we used the PydanticAI framework, which acts as a connector between the prompt, input data, the data structure of the output data, functions the agent has access to, and any bespoke validation of the results. The end result is a function that accepts structured input, and returns structured output, relatively painlessly. 

    Although this example uses OpenAI’s GPT models, in future experiments we can use the PydanticAI approach to connect to open source models (the framework is designed to be model-agnostic). In principle this means that this project could in future switch the underlying provider used.

    Process

    Step 1: Writing a scraper 

    The first thing we needed to do was to get the official data from Parliament’s APPG register into a more structured form. 

    You can see an example of this page for the Africa APPG. This is a good task for a traditional scraper, but would also have been a fiddly one to write by hand. Using ChatGPT, we gave it an extract of the HTML, and asked for a Pydantic data structure and a script to convert the data. This worked pretty well, with some tweaking to the format over time. When errors emerged in different APPGs, passing the error and an understanding of what should have happened back to the Copilot agent (using a Claude model) led to working fixes. In using the coding agent, the key decision was which parts of the project to be opinionated about – for us this has mostly meant being very explicit about data structures (and validation to ensure they’re correct), and more relaxed about the pipes that connect things up.

    Step 2: Adding categories to APPGs

    From the official data, we only know whether an APPG is a country or subject area group. We want to make the list a bit more explorable by breaking this down into categories.

    In the spirit of experimenting with LLMs, we copied all the subject area APPGs’ names and purpose statements into one of OpenAI’s reasoning models and asked for 10-20 sub-categories. It came back with 20, and they looked reasonable.

    We then created a small functionless agent interface, giving it the title and purpose of a specific APPG, and returning a list of potential categories (preferring one, but allowing all that seem relevant). 
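
    A minimal sketch of that kind of functionless agent, using PydanticAI (the model choice, prompt and data structure here are illustrative rather than our exact code):

    from pydantic import BaseModel
    from pydantic_ai import Agent

    class APPGCategories(BaseModel):  # illustrative output structure
        categories: list[str]         # drawn from the 20 agreed sub-categories

    categoriser = Agent(
        "openai:gpt-4o-mini",         # illustrative model choice
        output_type=APPGCategories,
        system_prompt=(
            "Given an APPG's title and purpose, return the relevant "
            "categories, preferring a single best fit."
        ),
    )

    result = categoriser.run_sync("Title: ...\nPurpose: ...")
    print(result.output.categories)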

    Spot-checking these, they seem reasonable, and for the purpose of breaking down the big list a bit, this is a good step up. It means we can quickly see, for example, the APPGs that are likely to be relevant to environmental matters.

    Step 3: Finding missing websites

    Some APPGs list their external website – some do not. Here we use AI tools as part of the workflow, to find those missing sites (which may not exist). 

    We created an agent function with access to a web search tool (Tavily), a function to check if a URL is valid, and a prompt to help identify the correct site. This creates a loop to search for and identify a good candidate for the website.

    At this point, there is a manual check that prompts the user to review each site one by one before confirming it as valid. 45 of the 74 sites identified in the first wave were valid; the invalid ones were news articles, APPGs in other parliaments, or sites for previous iterations of that APPG.

    This is not comprehensive (we and our volunteers found some more manually after the fact), but it is an interesting trial in finding data starting only with a search engine.

    Step 4: Finding published members

    The final step is to get a list of members (if published) from these websites. We need a really flexible approach here: names might be in a structured list, but they can also sit in a single paragraph; they might be on a members page, on the home page, or spread over three pages. There is no consistency to fall back on.

    Here, we created an agent with a function that can fetch a web page and convert it to markdown. Using this recursively, the prompt instructs the agent to find the most relevant page (in some cases, pages) that could contain membership information, and to return a data structure of the members (MPs, Lords, Other). This returned over 5,000 names in the data format we specified.

    The big risk at this point is that, having been asked for a list of MPs, the model makes some up. The validation we use is to check that each name in the list is present within the HTML content of the page it was extracted from. If there’s an error, it runs again, and will give up rather than use an incorrect list. There is some possibility for misinterpretation, but this prevents outright fabrication. Errors flagged here tended to occur when the LLM had tidied up a name’s formatting, meaning the text no longer matched exactly against the page.
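    The check itself can be very simple. A sketch of the idea (exact matching is deliberately strict, which is why reformatted names get flagged):

    ```python
    def missing_names(names: list[str], page_text: str) -> list[str]:
        """Return extracted names that can't be found verbatim in the page.

        This is the anti-fabrication check: a name the model claims
        to have extracted must literally appear in the source text.
        An empty result means the list passes.
        """
        haystack = " ".join(page_text.split())  # normalise whitespace
        return [n for n in names if " ".join(n.split()) not in haystack]


    # If anything comes back, re-run the extraction; after repeated
    # failures, give up and queue the APPG for manual review instead.
    ```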

    The key problem here is one that a human would have too: some APPG lists are out of date. I added an extra flag to detect lists containing people who had left Parliament, which then triggered a manual review. The process was also sometimes picking up lists that were not membership lists at all; we made some adjustments to the prompt after it picked up attendees at an AGM, which is not wrong, but incomplete.

    Step 5: Manual data

    As our main blog post explains, we then needed to contact APPGs directly for lists that were not published. This presented a new problem: what we got back was a mix of spreadsheets and emails with different levels of detail, some including party details in other columns, some not.

    Our solution was a Google Doc with each list formatted under a heading carrying the APPG title, so we could simply copy and paste information into it.

    This file is then downloaded as markdown and converted into a list of names, with a few tweaks to clean up leading numbers and identify the name component of each line. Again, this step was substantially written via prompt: we gave the LLM examples of the problem data, and it created regular expressions to clean the data into the basic list of names we needed.
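    The patterns below are illustrative rather than the generated ones (and the names are invented), but they show the shape of that cleanup:

    ```python
    import re

    LEADING_NUMBER = re.compile(r"^\s*\d+[.)]?\s*")  # "1. ", "2) "
    TRAILING_PARTY = re.compile(r"\s*\((?:Lab|Con|LD|SNP)[^)]*\)\s*$")


    def clean_line(line: str) -> str | None:
        """Reduce one pasted line to a bare name; None for blank lines."""
        line = LEADING_NUMBER.sub("", line)
        line = TRAILING_PARTY.sub("", line).strip()
        return line or None


    raw = ["1. Jane Example MP (Lab)", "2) John Sample MP", ""]
    names = [n for n in (clean_line(l) for l in raw) if n]
    # -> ["Jane Example MP", "John Sample MP"]
    ```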

    Step 6: Tidying member information

    What we want to do next is get from a list of names to a list of TheyWorkForYou unique IDs. 

    We have a library that helps reconcile names to IDs, but a challenge here is the huge range of spelling mistakes (sometimes to the extent that you could not actually work out which MP was meant).

    What we needed was a quick tool to compare each input name against our list of known names and suggest near matches. Here we again turned to the coding agent: posing the problem, providing some snippets showing how to interact with our existing library, and letting it craft a command line interface.

    This fairly quickly gave a good interface for reviewing spelling problems (later refined to auto-match within a certain threshold). This helper tool is not especially complicated, but as something with a clear input and output, isolated from the rest of the flow, it was a good candidate for testing Copilot’s ability to create a function. In choosing what to spend time on, this would not otherwise have been a priority, but it brought a useful feature into scope.
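    The core of such a tool can be as small as the standard library’s difflib. A sketch with invented names (the real list of known names comes from our reconciliation library):

    ```python
    import difflib

    KNOWN_NAMES = ["Jane Example", "John Sample"]  # really: all known MPs


    def suggest(name: str, cutoff: float = 0.8) -> list[str]:
        """Suggest close matches for a possibly misspelled name.

        With a single, very close match the CLI can accept it
        automatically; otherwise it asks a human to choose.
        """
        return difflib.get_close_matches(name, KNOWN_NAMES, n=3, cutoff=cutoff)


    print(suggest("Jane Exampel"))  # -> ['Jane Example']
    ```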

    Result

    The end result of this process is fairly effective: a series of steps we can repeat every six weeks, when a new APPG register is released, to look for websites for new APPGs, or to recheck previously scanned pages.

    Sequencing the steps this way means that manual review happens on batches of similar tasks, rather than checking each APPG through every step in turn.

    In general, I’m pretty happy with the results: this made possible a project that would otherwise only have been achievable with a big (and, for participants, fairly boring) crowdsourcing effort.

    One of the problems we have to deal with a lot is fragmented public data, where relevant data is scattered all over the place and takes a lot of work to bring back together. Here we found AI tools useful both in discovering a component of the data and in reconciling it to a common standard.

    The “AI scrapes then verifies content is present” approach worked well here but would struggle with more complex problems. For instance, if we really needed to be sure we were extracting a correct party label alongside a name, knowing that ‘Labour’ was present on the page wouldn’t be as helpful. 

    Building on this, the AI-written scraper code worked pretty well. If properly sandboxed (pydantic-ai has support for running Python in a sandbox using Pyodide), transformation code could be written to convert data between different sets of headers without running the data itself through an LLM. This potentially helps with some of the fragmented data problems of reconciling compatible but differently structured schemas. LLM-involved approaches have real potential to create new datasets through easier discovery and joining of data.
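    The kind of small, reviewable transform this implies might be no more than a header mapping; the mapping and field names below are invented for illustration:

    ```python
    # Per-source mapping from a spreadsheet's headers to a canonical
    # schema; an LLM can write the mapping, but the data itself never
    # passes through the model.
    HEADER_MAP = {
        "Member Name": "name",
        "Political Party": "party",
    }


    def remap_row(row: dict[str, str]) -> dict[str, str]:
        """Rename a row's keys to the canonical schema, dropping extras."""
        return {new: row[old] for old, new in HEADER_MAP.items() if old in row}


    remap_row({"Member Name": "Jane Example MP", "Political Party": "Lab"})
    # -> {"name": "Jane Example MP", "party": "Lab"}
    ```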

    This is a way we can use new technology to make a dataset possible, but it would also be much easier if Parliament gathered and published this information in the first place. The equivalent Cross Party Groups in the Scottish Parliament simply publish a downloadable file of all memberships in their open data portal. We need to think about how new technological approaches can avoid merely propping up bad transparency, and instead encourage better transparency all the way upstream.

    Header image: Photo by Susan Holt Simpson on Unsplash

  10. “I found inspiring things happening in councils all over the UK, pockets of brilliance!”

    Our partner organisation Climate Emergency UK put us in touch with Mat Allen, based in Northern Ireland. They described him as “one of our most dedicated Scorecards volunteers” — and when we heard what he’s been up to, we could certainly see why.

    Volunteering for CE UK is, for many, an opportunity to do something tangible and impactful around climate action. “Getting busy and doing something useful can counter the effects of negative stories,” Mat told us.

    So how did it all begin? He explains: “My wife forwarded me a link to an article about CE UK, and I was struck by the importance of somebody taking oversight of the action being taken to address the third of greenhouse gas emissions that can be influenced by our councils.

    “That is a bit niche, but addressing the climate emergency requires so many things to be done at the local level that I thought this could be something I could usefully contribute to.”

    Volunteering as a marker

    So Mat got involved. “I signed up to be a marker for CE UK’s 2023 survey, and was assigned a batch of UK councils to score against the criteria laid out in the Buildings and Heating section.” 

    There’s no denying that the marking work can be complex, so how did Mat find it? 

    “The process was well documented”, he says, “with support from the small CE UK team, and other volunteers available to give guidance when needed. While searching the internet for the evidence that allows marks to be awarded, I found inspiring things happening in councils all over the UK, pockets of brilliance! Some I recommended for inclusion in the Best Practices section of the CE UK website for others to enjoy, and perhaps replicate in their own areas!”

    Once the 2023 Scorecards, based on those marks, were published, Mat was able to assess his own region. “The challenge in Northern Ireland became apparent, with much lower scores than the rest of the UK. Our eleven Northern Ireland councils have many challenges — as do all UK councils — with the cost of living crisis putting immense pressure on service delivery, and the level of rates chargeable (yes, we still have rates over here!).

    “I made useful contact with my own council, Mid and East Antrim, who gave consideration to our recommended ‘easy wins’ — the actions that can have greatest benefit with least expenditure. They were facing huge financial challenges that year.”

    Coming back for more

    That was enough to bring Mat back for the next round of work — and this time, he got even more involved!

    “I was properly hooked by the time CE UK was seeking volunteers for the 2025 survey, and I signed up as a marker and an auditor, this time in the Transport section. 

    “As an auditor, I reviewed the Right to Reply responses made by councils to their initially assigned marks, to determine if scores should be changed based on the new evidence they supplied. This was more challenging, often requiring further online research, and comparison with other councils, to ensure scores were fair.”

    One perhaps unexpected result that we hear from many volunteers is how assessing councils’ climate action can lead to a better understanding of the challenges they face. Mat feels this too:

    “I’ve learned a lot while marking and auditing, both about the complexity of council operations, and about successful climate action. The council staff involved are trying their best to do the right things, but surrounded by challenges of understanding and prioritisation. I feel for them, as they try and do right by their ratepayers and the planet!”

    Getting the word out there

    A small organisation like CE UK doesn’t have a big marketing budget, so anything that helps spread the word is useful, especially from those on the ground who can forge links with their own councils. Mat was able to assist here, too:

    “As the release date for the results of the 2025 survey approached, I wanted to get more impact locally than we achieved with the results of the 2023 survey. I signed up as an ambassador for Climate Emergency UK (have yet to be offered a Ferrero Rocher!). 

    “Along with my daughter, we decided to act locally, trying to gain traction with my own and the other two County Antrim councils (Antrim and Newtownabbey, and Causeway Coast and Glens), by holding a public launch meeting in Ballymena to publicise our initiative.

    “With help and support from CE UK and Friends of the Earth, we held that meeting in June 2025 in Ballymena. The climate change teams from Antrim and Newtownabbey and Mid and East Antrim Councils joined us, as well as Councillor Quigley and residents from all three target council areas. 

    “Thanks to the efforts of Councillor McShane from Causeway Coast and Glens Council, we made contact with their newly appointed Climate Change Manager the following day on a Zoom call, and we look forward to ongoing useful engagement with CCC&G! 

    “We were pleased to award the ‘Most Improved NI Council’ award to Antrim and Newtownabbey in the presence of our local newspaper, The Ballymena and Antrim Guardian.

    “The meeting was worthwhile, helping us at CE UK better understand the challenges these motivated climate teams face, and I hope introducing those folks to useful case studies and information about best practice we can offer.”

    Looking to the future

    Mat is a great believer in communication, saying, “Perhaps the greatest challenge we all face, and more so in Northern Ireland than elsewhere in the UK, is public engagement, and our councils are important players as they have those everyday interactions and influence with residents and communities.

    “The work goes on in councils all over Northern Ireland, and at CE UK we are taking stock and thinking of how we can best help our eleven councils progress essential actions to reduce emissions, bringing communities with them, and prioritising the needs of the vulnerable.”

    Mat has found something valuable in CE UK, beyond the ability to get out and do something: a set of data that backs it all up:

    “Taking effective action — in anything — is helped by objective measures and targets. Climate Emergency UK is the only organisation offering such measures in the UK, and we research and publish these measures for all councils for free!”

    Finally, he says, “We hope to continue engagement with our three County Antrim councils, and would like to make contact with, and help, the other eight Northern Ireland councils to add more objectivity, breadth and substance to their climate action plans.

    “We would welcome contacts from the Climate Change Teams and councillors across the province, and we hope to invite more councils to an event to launch the 2027 CE UK Council Climate Action Scorecards!”

    If you are reading this and you are one of those councils, do drop CE UK an email at declare@climateemergency.uk

    Many thanks to Mat for sharing his journey as a CE UK Scorecards volunteer — we hope it will inspire others who are wondering how to play their part! CE UK are not currently recruiting for volunteers, but when the next round of activity starts up, you’ll be able to see opportunities on this page.

    Image: K. Mitch Hodge