It finally feels like Spring is in the air, and you know it’s been a busy start to the year when we’re rolling the first two months into one set of monthnotes – in the middle of March!
So, what have we been up to?
Well – we’ve been adding datasets to and testing our alpha version of the Local Intelligence Hub tool that we built with The Climate Coalition. Feedback has been really good and this feels like something that’s really going to level up the ability of UK climate organisations to share data and coordinate their actions, at both a local and national level. We hope to share more about this project in the coming months, once it’s been made available to TCC members.
We submitted talks to a couple of conferences/events – and lo and behold, we’ll be in Sheffield (and online) for the Festival of Debate on May 24, with a panel of exceptional guests. Our topic? “What if you could reshape democracy for the better — and you had 20 years to do so?” Climate is sure to be part of the answer. Fancy joining us? Book here.
Between all this we’ve been working hard with our friends at Climate Emergency UK on the next round of the Council Climate Scorecards. Their draft methodology was released in November 2022, and the first round of marking started in January 2023. Part of our support has included building a Django application to store the marking data – and this has already dramatically improved the experience for Climate Emergency UK’s volunteers.
Climate Emergency UK are also working with mySociety’s Transparency team, using WhatDoTheyKnow Projects (a WhatDoTheyKnow Pro beta feature that helps researchers crowdsource data out of batch FOI requests) to gather some of the data for the scoring. All their FOI requests will be published on WhatDoTheyKnow later this year.
Our IICT grants are coming to an end soon – we’ve put out a blog post about Lynsted Community Kitchen Garden and the data they’re collecting with the weather station we funded. They have a public event on March 25, if anyone near Lynsted wants to visit and check it out! Updates from Possible and Better Futures should be coming soon.
On the research side, we launched our report on unlocking the value of fragmented public data, which is part of our work on the data ecosystem around climate data. Our plan over the next few months is to support a few research commissions which link into this report and help demonstrate uses of climate data.
We’ve confirmed a partnership with Dark Matter Labs – we’ll be moving forward with them and our Neighbourhood Warmth prototype, exploring how we could encourage neighbours to come together to take their first retrofit action, such as getting a house survey. We’ll be building a working prototype over the next few weeks, then testing it out with communities in three pilot areas around the UK, to ensure that what we’re building makes sense to the people we’re aiming to serve.
And finally, we met up in person! We had a team meeting in early February which was a wonderful chance for us all to take stock of the last year, and discuss the future. We’ve been making some plans for year 3 of the Climate programme and after widening our scope through prototyping, now we’re going to be focusing back in again on building and proving the impact of the services we’re running.
That’s a very whistlestop tour of our first months of 2023!
Image: Daniel James
As a joint project between mySociety and the Centre for Public Data, we have written a set of simple principles for how to get the most impact out of publishing public data. You can read the report online, or download it as a PDF.
Fragmented public data is a problem that happens when many organisations are required to publish the same data, but not to a common standard or in a common location. Data is published, but without work to join up the results, it rarely has the intended impacts.
The results of this are frustrating for everyone. Data users cannot easily use the data, policy makers do not see the impact they want, and publishers in public authorities are required to produce data without seeing clear results from their work.
Better and more consistent publication of data by local authorities helps enable understanding and action at scale across a range of areas. At the same time, we recognise that technical advice has often assumed a higher level of technical capacity than is in practice available for many data publishing tasks. Our goal has been to make sure our advice makes data more accessible, while having a realistic idea of the technical capacity and support needed for data publishing.
This report recommends three minimum features for a data publishing requirement to be successful:
- A collaborative (but compulsory) data standard to agree the data and format that is expected.
- A central repository of the location of the published data, which is kept up to date with new releases of data.
- Support from the data convener to make publication simple and effective – e.g. through validation and publication tools, coordinating returns, and technical support.
We recommend that:
- Whenever government imposes duties on multiple public authorities to publish datasets in future, it should also provide the staff and budget to enable these features.
- The Central Digital and Data Office should publish official guidance covering the above.
You can read the report online, or download it as a PDF.
Better data publishing helps climate action
This project is informed by recurring frustrations we have run into in our work. Projects such as KeepItIntheCommunity, which mapped registered Assets of Community Value, were much more complicated than they needed to be because while transparency was required of authorities, coordination was not – meaning the task of keeping the site comprehensive and updated was enormously difficult. In principle, we could build a tool that empowered communities in line with the intentions of the original policy makers. In practice, a lack of support for basic data publishing made the project much harder than it needed to be.
This problem also affects our work around local government and reducing emissions. Local government has influence over one third of emissions, but much of that is indirect rather than from the corporate emissions of the authority directly. As such, many activities (and datasets) of local government have climate implications, even if the work or data is not understood as climate data. For instance, the difficulty in accessing the asset data of local authorities makes it harder for civil society to cross-reference this information with the energy rating of properties, and produce tools to help councils understand the wider picture.
In future we will be publishing in more detail the kind of data we think is needed to support local authorities in emission reduction – but emissions reduction cannot be isolated from the general work of local authorities. Improving the consistency of the data that is published helps everyone better understand the work that is happening, and makes local government more efficient.
Sign up to hear more about our climate work, or to the monthly mySociety newsletter.
Photo credit: Photo by Olav Ahrens Røtne on Unsplash
Data is at the core of everything we do at mySociety, and the better quality it is, the easier our work becomes — so the latest output from TICTeC Labs is particularly welcome. We would love everyone to know exactly what constitutes good quality data!
And, thanks to the members of the Action Lab #3 working group, now they can. They awarded a contract to the Canadian civic tech group Open North, to devise a course on Data Quality. This course is free to everyone, and we know it’ll be of huge benefit to the international civic tech community.
Available online in English and French (and hopefully with more languages to follow), the course provides users with a practical introduction to the topic, discussing key concepts and setting practical exercises.
Quality information for civic tech success
This output was the end result of our third TICTeC Labs Civic Surgery, which took place back in March 2022. That saw participants discussing the theme: ‘Accessing quality information for civic tech success: how can we overcome barriers to accessing good data and documentation?’ — it was within this session that the concept of a training course first arose.
This course uses Open North’s existing learning platform to provide training which covers:
- Understanding the importance of data quality
- Understanding the key terms when engaging with data
- Knowing how and where to find good quality data
- Recognising the barriers to accessing data and documentation
- Knowing how to evaluate the quality of a dataset
Collaborating with the Action Lab members throughout the process of planning and building the course, Open North have created an online educational resource that is suitable for a wide range of audiences. It provides a starting point for those already working with data, or those at the beginning of their journey.
Take the course
You can find out more, and take the course by signing up to Open North’s Training Center and then looking for Data Quality (D103), with the French version at La qualité des données (D103F). In fact, once your account is activated you can take any of their free courses, so take a look around and you might find some more resources to try, as well.
November was another busy month for our Climate programme, with progress on a number of fronts – from the return of an old friend, in the shape of the Council Climate Scorecards; to the development of two new ones, as a result of our prototyping process earlier this year. We’ve also been working hard to share our data and tools with new audiences. Here’s a quick round up:
Constituency data for climate campaigners
As Alexander mentioned in October, we’ve been working on a Beta version of a platform that brings together data about MPs, constituencies, and local climate action, as part of a project with The Climate Coalition. The aim is to help campaigners at both national and local levels to understand where to focus their efforts on enabling real local action on climate goals.
This month—thanks to the involvement of not only Struan and Alexander but also Graeme, on loan from our Transparency programme—we’ve made lots of progress, adding the features and importing the datasets we’ll need for testing out the minimum viable product with target users in the New Year. I look forward to sharing more with you in the coming months!
Exposing high-emissions local authority contracts
Another service that’s come out of one of our earlier prototyping weeks is ‘Contract Countdown’, which aims to give citizens advance notice of large, high-emissions local authority contracts that might be expiring in six, 12, or more months.
This November, Alexander finished developing the final pieces of a working Alpha version – including the use of real contracts from UK Contracts Finder and the Find A Tender service, and pulling in the details of local authority climate officers and councillors with climate/environment responsibilities (so we could test the idea of helping users contact these representatives).
And Siôn and I have been testing the alpha with target users – including local and national journalists, local authority climate officers and procurement officers, and local climate activists. We aim to continue getting feedback on the Alpha throughout December, and maybe January, after which point we can make a decision on whether to develop and launch a full service later in 2023.
Climate Action Scorecards 2023
Speaking of next year, preparations are already underway for the 2023 follow-up to the Council Climate Scorecards project—this month saw Lucas and me working with Climate Emergency UK to design and publish their draft methodology for the assessment that will begin next year.
With CEUK’s assessors now looking at councils’ climate actions, in addition to their plans, we wanted to make it as easy as possible to understand precisely which questions your local authority will be scored on. I think we came up with a nice solution, where you can filter the list of draft questions by your local authority name or postcode, as well as by local authority type.
Sharing our data and tools
In other news, Alex updated our deprivation and urban/rural classification datasets to show relative figures for local authorities and Westminster parliamentary constituencies. We also published a local authorities lookup dataset that makes it easy to convert between the many names and codes used to identify local authorities.
If you want to use these new datasets—or any of our data in fact—Alex runs drop-in office hours on Thursdays and Fridays to talk about just that. We’re also happy to help collect or analyse climate-related data for free, as part of our work on supporting the UK’s climate data ecosystem – you can read more about that here.
Speaking of data ecosystems, you’ll now find a number of mySociety’s open climate datasets listed in Subak’s Data Catalogue, and Icebreaker One’s OpenNetZero catalogue.
Finally, Myf and Siôn in particular have continued to share and talk about our tools this month, and how people are using them to support local climate action. Highlights include attending the Natural History Consortium’s Communicate conference; giving a hands-on workshop about all of mySociety’s tools for London’s small charities and community groups at Superhighways’ “Where’s The Power In Data” conference; and publishing a really exciting case study about how an officer at Surrey County Council used CAPE to share experiences and best practices with other similar councils elsewhere in the UK.
We spoke to Rebecca Sawyer of Brighton Peace and Environment Centre, to discover how their ReForest Brighton project interfaces with our own CAPE and Council Climate Scorecards sites.
As it turns out, our projects have a lot in common. Both aim to make it easier for everyone to understand and assess the progress a council is making towards cutting carbon emissions, a field where the picture can be complicated and difficult for the average person to follow. That starts with data.
Rebecca explained, “Identifying the path to carbon neutrality is not straightforward, and the data that would enable organisations to know where they currently are on this path is very weak.”
To address this, ReForest Brighton is developing an interactive website to show in real time the progress that each local authority has made in relation to its individual carbon neutral targets.
Naturally, the project began by looking at the organisation’s hometown of Brighton, which has a target of Net Zero carbon emissions by 2030. That’s just the start, though; the model is replicable for any other local authority in the UK, allowing their own carbon neutral targets and the actions determined by their climate action plans to be slotted in.
“The aim,” says Rebecca, “is to provide real time quality data that will enable decision making around policy and practice.
“So for example, if it’s clear that maintaining the current level of action won’t bring a city to carbon neutrality by their set date, the council can refocus their efforts to reduce emissions and sequestrate more carbon.”
The ultimate target? “To make local authority councils more ambitious.”
Carbon neutral dates
So where do CAPE and the Scorecards site come in? As Rebecca explained, CAPE was useful mainly for a single datapoint amongst the many that it provides.
“The main way we’ve been using it is to retrieve the carbon neutral dates of all the individual local authorities in the UK.
“Without this data being easily accessible it would’ve taken us a long time and lot of resources to go through more than 300 local authorities and dig out their target dates.”
And as for the Scorecards site, this has been more of a sanity-check tool: “We used it once we’d completed our calculations, to check our ratings of each local authority against the Scorecards rating.
“For example, if our calculations rated a local authority with high climate action but the Scorecards had it as low, then we’d analyse and reassess our ratings.”
As well as the interactive map, their project will produce predictive data to show how much progress the council will have made by their target zero emissions date.
For those who like the technical details, Rebecca is keen to oblige: “Our categorisation is based on a calculation of emission trends from 2016-2020. The trends allow us to predict where each local authority will be by the carbon neutral target date we downloaded from CAPE, using the ‘forecast’ formula (=FORECAST(x, known_ys, known_xs)).
“There is actually 15 years’ worth of emissions data available, but we chose this five-year period because climate action has only started becoming a consideration for local authorities in the last few years.
“Basically, we look at the predicted emissions on the authority’s carbon neutral date and categorise them accordingly — and if a local authority had no carbon neutral target or plans, it is automatically rated zero.”
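Excel’s FORECAST is ordinary linear regression, so the same prediction can be reproduced in a few lines of Python. This is a sketch of the calculation only – the emissions figures below are invented for illustration, not ReForest Brighton’s actual data:

```python
def forecast(x, known_ys, known_xs):
    """Linear-trend prediction, equivalent to Excel's
    =FORECAST(x, known_ys, known_xs)."""
    n = len(known_xs)
    mean_x = sum(known_xs) / n
    mean_y = sum(known_ys) / n
    # Least-squares slope of y against x
    slope = sum((xi - mean_x) * (yi - mean_y)
                for xi, yi in zip(known_xs, known_ys))
    slope /= sum((xi - mean_x) ** 2 for xi in known_xs)
    return mean_y + slope * (x - mean_x)

# Hypothetical emissions (ktCO2e) for 2016-2020, trending downwards
years = [2016, 2017, 2018, 2019, 2020]
emissions = [500, 480, 465, 445, 430]

# Predicted emissions at a 2030 carbon neutral target date
print(round(forecast(2030, emissions, years), 1))  # → 254.0
```

If the predicted figure at the target date is well above zero, the trend alone will not reach carbon neutrality – which is exactly the kind of gap the categorisation is designed to surface.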
A knock-on effect
ReForest Brighton wants to make it easier for the public to understand how their local authority is doing in achieving its carbon reduction targets — and they have another aim, too:
“We would like the public to push local governments to take faster, more effective action, and we’re planning to help them do this by giving them the means to write to their elected representatives, and to share the website with their friends and contacts.
“But even while hoping that councils will be making as much progress as possible, we’re also pushing for transparency. We’d even encourage an authority to push their carbon neutral target date further back if it gave a more honest picture of where they are at.”
Brighton Peace and Environment Centre are a registered charity and they welcome volunteers: get in touch if you would like to know more.
You can also make a donation to them, using this link.
Image: Aaron Burden
Last Wednesday a varied audience convened online for our Innovations In Climate Tech event.
The aim: to showcase some of the remarkable and effective projects being implemented in the UK and further afield, and to spark inspiration so that these, or similar projects, might be replicated in other UK regions.
mySociety has three £5,000 grants to give to innovators and local councils who work together and trial something they’ve seen, or been inspired by, during the event.
Missed the live version? Don’t worry: we have videos and notes, and you don’t have to have attended to be able to bid for a grant.
Rewatch or read up
Here’s where to find the various assets from the day:
- Watch a video of all the morning presentations, followed by the Q&A. This video features:
- Annie Pickering from Climate Emergency UK, on how they scored councils’ Climate Action Plans;
- Ariane Crampton from Wiltshire County Council on how they tackled outreach to diverse communities with their climate consultation;
- Claus Wilhelmsen from Copenhagen City Council on the practical ways in which they are tackling carbon cutting within construction industries;
- Ornaldo Gjergji from the European Data Journalism Network on how visualising temperature data from individual cities and towns helps people better understand the impacts of climate change;
- Kasper Spaan from Waternet on creating green roofs across the city to aid urban cooling and biodiversity.
- The afternoon breakout sessions weren’t recorded, but you can read notes of the presentations and subsequent discussions for:
- The Adaptation session (Padlet here) in which Josh Shimmin from Atamate talked about a data-driven approach to retrofitting housing;
- The Engagement session (Padlet here) in which Susan Rodaway from Pennard Community Council presented on a community consultation tool that helped them decide what to spend budget on; and Arnau Quinquilla from Climate 2025 talked about mapping climate movements across the world;
- The Spatial Planning session (Padlet here) in which Lora Botev from CitizenLab explained how their software enables councils to run consultations and grow an active group of residents who have a voice in decisions around climate;
- The Equity, Diversity and Inclusion session (Padlet here) in which Emma Geen from the Bristol Disability Equality Forum explained how vital it is to include disabled people in a green transition, and the ways in which the group has taken action to make this happen.
On 19 October, we’ll be running an informal session online, explaining what we’re looking for in a grant pitch, and giving you the chance to explore your ideas with potential partners. Then, if you want to go forward and bid for one of three £5,000 grants, we’ll give you everything you need to make your pitch.
You do not have to have attended the first event to join us at this stage. Please explore the resources above, add your thoughts to the Padlets, and sign up for this event via Eventbrite.
What sort of projects will we be funding?
To be eligible to bid for one of the grants, you must either be:
- a council that wants to trial an idea; or
- an organisation that wants to work with a council to trial your project.
Partnerships can be between two or more organisations, but every partnership must include at least one local council (and might only consist of councils). But don’t worry if you haven’t got a partner in mind yet – you may find one at this event.
- You might have seen an idea in the presentations that is directly applicable to your council area, and want to simply replicate it.
- Or, in a less straightforward but equally valid scenario, you might simply have seen an organisation you’d like to work with, or had a completely new idea sparked by something you saw.
- You might have no ideas at all, but a commitment to try something new… bring an open mind and see if anything at the event grabs you!
Sign up for the upcoming event here.
This is a more technical blog post in companion to our recent blog about local climate data. Read on if you’re interested in the tools and approaches we’re using in the Climate team to analyse and publish data.
How we’re handling common data analysis and data publishing tasks.
Generally we do all our data analysis in Python and Jupyter notebooks. While we have some analysis using R, we have more Python developers and projects, so this makes it easier for analysis code to be shared and understood between analysis and production projects.
Following the same basic ideas as (and stealing some folder structure from) the cookiecutter data science approach – that each small project should live in a separate repository – we have a standard repository template for data processing and analysis work.
The template defines a folder structure, and standard config files for development in Docker and VS Code. A shared data_common library provides a base Docker image (for faster setup of new repos), and common tools and utilities that are shared between projects for dataset management. This includes helpers for managing dataset releases, and for working with our charting theme. The use of Docker means that the development environment and the GitHub Actions environment can be kept in sync – and so processes can easily be shifted to a scheduled task as a GitHub Action.
The advantage of this common library approach is that it is easy to update the set of common tools from each new project, but because each project is pegged to a commit of the common library, new projects get the benefit of advances, while old projects do not need to be updated all the time to keep working.
This process can run end-to-end in GitHub – the repository is created in GitHub, Codespaces can be used for development, automated testing and building happen with GitHub Actions, and the data is published through GitHub Pages. The use of GitHub Actions especially means testing and validation of the data can live on GitHub’s infrastructure, rather than requiring additional work for each small project on our servers.
One of the goals of this data management process is to make it easy to take a dataset we’ve built for our purposes, and make it easily accessible for re-use by others.
The data_common library contains a dataset command line tool, which automates the creation of various config files, publishing, and validation of our data.
Rather than reinventing the wheel, we use the frictionless data standard as a way of describing the data. A repo will hold one or more data packages, which are a collection of data resources (generally a CSV table). The dataset tool detects changes to the data resources, and updates the config files. Changes between config files can then be used for automated version changes.
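As a rough illustration of what a frictionless descriptor looks like, here is a minimal data package built in plain Python – the package name, path and columns are invented for the example, not one of our real datasets:

```python
import json

# A minimal Frictionless-style data package: one package containing
# one CSV resource, with a schema describing each column.
package = {
    "name": "local-authority-emissions",
    "resources": [
        {
            "name": "emissions",
            "path": "data/emissions.csv",
            "format": "csv",
            "schema": {
                "fields": [
                    {"name": "local-authority-code", "type": "string",
                     "description": "Three-letter local authority code"},
                    {"name": "year", "type": "integer",
                     "description": "Calendar year of the emissions figure"},
                    {"name": "emissions-ktco2e", "type": "number",
                     "description": "Territorial emissions, in ktCO2e"},
                ]
            },
        }
    ],
}

# In practice this descriptor is written out as datapackage.json
print(json.dumps(package, indent=2).splitlines()[0])  # → {
```

Because the schema lives alongside the data, a change to the descriptor is a machine-readable signal that the structure of the data has changed – which is what drives the automated version changes.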
Leaning on the frictionless standard for basic validation that the structure is right, we use pytest to run additional tests on the data itself. This means we define a set of rules that the dataset should pass (eg ‘all cells in this column contain a value’), and if it doesn’t, the dataset will not validate and will fail to build.
This is especially important because we have datasets that are fed by automated processes, read external Google Sheets, or accept input from other organisations. The local authority codes dataset has a number of tests to check authorities haven’t been unexpectedly deleted, that the start date and end dates make sense, and that only certain kinds of authorities can be designated as the county council or combined authority overlapping with a different authority. This means that when someone submits a change to the source dataset, we can have a certain amount of faith that the dataset is being improved because the automated testing is checking that nothing is obviously broken.
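Rules like those described above can be sketched as ordinary pytest-style test functions. The CSV content and the rules below are invented for illustration, rather than taken from the real test suite:

```python
import csv
import io

# A toy stand-in for a data resource under test
CSV_DATA = """\
local-authority-code,start-date,end-date
ABC,1996-04-01,
DEF,1996-04-01,2009-03-31
"""

rows = list(csv.DictReader(io.StringIO(CSV_DATA)))

def test_codes_always_present():
    # e.g. 'all cells in this column contain a value'
    assert all(row["local-authority-code"] for row in rows)

def test_dates_make_sense():
    # an end date, where present, must not precede the start date
    # (ISO dates compare correctly as strings)
    for row in rows:
        if row["end-date"]:
            assert row["start-date"] <= row["end-date"]

# pytest would collect these automatically; run them directly here
test_codes_always_present()
test_dates_make_sense()
```

If any assertion fails, the build fails, so a bad change to the source data never reaches a published release.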
The automated versioning approach means the defined structure of a resource is also a form of automated testing. Generally following the semver rules for frictionless data (with the exception that adding a new column after the last column is not a major change), the dataset tool will try to determine whether a change from the previous version is a MAJOR (backward compatibility breaking), MINOR (new resource, row or column), or PATCH (correcting errors) change. Generally, we want to avoid major changes, and the automated action will throw an error if one happens. If a major change is required, it can be made manually. The fact that external users of the file can peg their usage to a particular major version means that changes can be made knowing nothing is immediately going to break (even if data may become more stale in the long run).
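The classification logic can be sketched roughly as below. This is a simplification for illustration – the real dataset tool works from frictionless descriptors rather than bare column lists:

```python
def classify_change(old_columns, new_columns):
    """Classify a schema change as MAJOR, MINOR or PATCH, roughly
    following the semver-for-data rules described above."""
    old, new = list(old_columns), list(new_columns)
    if old == new:
        return "PATCH"  # structure unchanged; data corrections only
    removed = [c for c in old if c not in new]
    surviving_order = [c for c in new if c in old]
    if removed or surviving_order != old:
        return "MAJOR"  # column removed, renamed or reordered
    if new[:len(old)] == old:
        return "MINOR"  # new columns appended after the last column
    return "MAJOR"      # column inserted mid-table breaks positions

print(classify_change(["code", "year"], ["code", "year", "notes"]))  # → MINOR
print(classify_change(["code", "year"], ["year", "code"]))           # → MAJOR
```

Running something like this in the build step is what lets the automated action flag an accidental backward-incompatible change before it is published.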
Data publishing and accessibility
The frictionless standard allows an optional description for each data column. We make this required, so that each column needs a human-readable description for the dataset to validate successfully. Internally, this is useful for enforcing documentation (and for making sure you really understand what units a column is in), and it makes it much easier for external users to understand what is going on.
Previously, we were uploading the CSVs to GitHub repositories and leaving it at that – but GitHub isn’t friendly to non-developers, and clicking a CSV file opens it up in the browser rather than downloading it.
To help make data more accessible, we now publish a small GitHub Pages site for each repo, which allows small static sites to be built from the contents of a repository (the EveryPolitician project also used this approach). This means we can have fuller documentation of the data, better analytics on access, sign-posting to surveys, and clearer links for downloading multiple versions of the data.
The automated deployment also means we can very easily create Excel files that package all the resources in a data package into a single file, including the metadata about the dataset, as well as information about how users can tell us how they’re using it.
Publishing in an Excel format acknowledges a practical reality that lots of people work in Excel. CSVs don’t always load nicely in Excel, and since Excel files can contain multiple sheets, we can add a cover page that makes it easier to use and understand our data by packaging all the explanations inside the file. We still produce both CSVs and XLSX files – and can now do so with very little work.
For developers who are interested in making automated use of the data, we also provide a small package that can be used in Python or as a CLI tool to fetch the data, and instructions on the download page on how to use it.
At mySociety Towers, we’re fans of Datasette, a tool for exploring datasets. Simon Willison recently released Datasette Lite, a version that runs entirely in the browser. That means that just by publishing our data as a SQLite file, we can add a link so that people can explore a dataset without leaving the browser. You can even create shareable links for queries: for example, all current local authorities in Scotland, or local authorities in the most deprived quintile. This lets us do some very rapid prototyping of what a data service might look like, just by packaging up some of the data using our new approach.
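Packaging tabular data as SQLite for Datasette Lite needs nothing beyond the Python standard library. Here is a minimal sketch, with invented table contents, of the kind of query a shareable link might run:

```python
import csv
import io
import sqlite3

# A toy stand-in for one of our CSV resources
CSV_DATA = """\
code,name,nation
ABD,Aberdeenshire,Scotland
ABE,Aberdeen City,Scotland
"""

rows = list(csv.DictReader(io.StringIO(CSV_DATA)))

# In practice this would be a .sqlite file published on GitHub Pages,
# which Datasette Lite can then load directly in the browser
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authorities (code TEXT, name TEXT, nation TEXT)")
conn.executemany(
    "INSERT INTO authorities VALUES (:code, :name, :nation)", rows
)
conn.commit()

# The same sort of query a shareable Datasette Lite link would encode
scottish = conn.execute(
    "SELECT code FROM authorities WHERE nation = 'Scotland'"
).fetchall()
print(scottish)  # → [('ABD',), ('ABE',)]
```

Because the query itself lives in the shareable URL, publishing a single SQLite file is enough to let anyone explore, filter and link to slices of the data without us running a server.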
Something in use in a few of our repos is the ability to automatically deploy analysis of the dataset when it is updated.
Analysis of the dataset can be designed in a Jupyter notebook (including tables and charts) – and this can be re-run and published on the same GitHub Pages deploy as the data itself. For instance, the UK Composite Rural Urban Classification produces this analysis. For the moment, this is just replacing previous automatic README creation – but in principle makes it easy for us to create simple, self-updating public charts and analysis of whatever we like.
Bringing it all back together and keeping people up to date with changes
The one downside of all these datasets living in different repositories is making them easy to discover. To help out with this, we add all data packages to our data.mysociety.org catalogue (itself a Jekyll site that updates via GitHub Actions) and have started a lightweight data announcement email list. If you have got this far, and want to see more of our data in future – sign up!
One of the things we want to do as part of our Climate programme is help build an ecosystem of data around local authorities and climate data.
We have a goal of reducing the carbon emissions that are within the control of local authorities, and we want to help people build tools and services that further that ambition.
We want to do more to actively encourage people to use our data, and to understand if there are any data gaps we can help fill to make everyone’s work easier.
So, have we already built something you think might be useful? We can help you use it.
Also, if there’s a dataset that would help you, but you don’t have the data skills required to take it further, we might be able to help build it! Does MapIt almost meet your needs but not quite? Let’s talk about it!
You can email us, or we are experimenting with running some drop-in hours where you can talk through a data problem with one of the team.
You can also sign up to our Climate newsletter to find out more about any future work we do to help grow this ecosystem.
Making our existing data more accessible
Through our previous expertise in local authority data, and in building the Climate Action Plan Explorer, we have gathered a lot of data that can overcome common challenges in new projects.
- A Swiss-army knife/skeleton key/useful spreadsheet that lists all current local authorities, and helps transform data between different lookups.
- MapIt: an API that can take postcodes and tell you which local authority they’re in (and much more!). Free for low-traffic charitable projects.
- Datasets of which authorities have published climate action plans.
- Datasets of which authorities have published net zero dates, and their scopes.
- A massive 1GB zip of all the climate plans we know about.
- Measures of local deprivation across the whole UK.
- A simplified version of the BEIS local authority emissions data.
- Measures of similarity between all local authorities (emissions, deprivation, distance, rural/urban and then all of those things together).
All of this data (plus more) can be found on our data portal.
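As an example of how little glue code some of this needs: a MapIt postcode lookup can be scripted in a few lines. This is a minimal sketch using only Python’s standard library; the `"areas"` key reflects our understanding of MapIt’s public response shape, but do check the MapIt documentation (and rate limits) before relying on it.

```python
# Sketch: looking up the areas that cover a postcode via MapIt's public API.
# Response structure is an assumption -- verify against the MapIt docs.
import json
from urllib.request import urlopen


def mapit_postcode_url(postcode: str) -> str:
    """Build the lookup URL; MapIt accepts postcodes with spaces removed."""
    return f"https://mapit.mysociety.org/postcode/{postcode.replace(' ', '')}"


def areas_for_postcode(postcode: str) -> dict:
    """Return the areas covering a postcode, keyed by MapIt area id."""
    with urlopen(mapit_postcode_url(postcode)) as response:
        return json.load(response)["areas"]
```

From the returned areas you can pick out the local authority and join it against any of the datasets above.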
We’ve also been working to make our data more accessible and explorable (example):
- Datasets now have good descriptions of what is in each column.
- Datasets can be downloaded as Excel files.
- Datasets can be previewed online using Datasette Lite.
- Basic instructions show how to automatically download updated versions of the data.
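To sketch what an automated download might look like: a small script can re-fetch a dataset file on a schedule so you always work from the latest version. The URL here is illustrative, not a real file path; copy the canonical link from the dataset’s download page.

```python
# Sketch: a repeatable dataset download, suitable for running from cron or a
# GitHub Action. The example URL is a placeholder, not a real file.
from pathlib import Path
from urllib.request import urlopen


def fetch_dataset(url: str, destination: Path) -> Path:
    """Download (or re-download) a dataset file, overwriting any stale copy."""
    with urlopen(url) as response:
        destination.write_bytes(response.read())
    return destination


# Hypothetical usage -- substitute the real URL from the download page:
# fetch_dataset("https://example.org/some-dataset.csv", Path("data.csv"))
```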
If you think you can build something new out of this data, we can help you out!
Building more data
There’s a lot of datasets we think we can make more of — for example, as part of our prototyping research we did some basic analysis of how we might use Energy Performance Certificate data (for home energy in general, and specific renting analysis).
But before we just start making data, we want to make sure we’re making data that is useful to people and that can help people tell stories, and build websites and tools. If there’s a dataset you need, where you think the raw elements already exist, get in touch. We might be able to help you out.
If you are using our data, please tell us you’re using our data
We really believe in the benefit of making our work open so that others can find and build on it. The big drawback is that the easier we make our data to access, the less we know about who is using it.
This is a problem, because ultimately our climate work is funded by organisations who would like to know what is happening because of our work. The more we know about what is useful about the data, and what you’re using it for, the better we can make the case to continue producing it.
Each download page has a survey that you can fill out to tell us about how you use the data. We’re also always happy to receive emails!
Stay updated about everything
Our work growing the ecosystem also includes events and campaigning activity. If you want to stay up to date with everything we do around climate, you can sign up to our newsletter.
Image: Emma Gossett
There’s lots, as ever, to report from the Climate team this month, so I’ll try to pick some highlights… this time with a Shakespearean flavour, as I (mySociety’s Liverpool correspondent) celebrate the opening of the Shakespeare North Playhouse in the nearby town of Prescot. May the bard’s lyrical visions propel us into a summer of climate action!
All things are ready, if our mind be so
In the previous monthnotes Jen trailed Innovations in Climate Tech – our online, half-day event, featuring inspirational examples and discussion about how civic tech projects are supporting climate action around the world, and how we might be able to seed more projects like this, with the cooperation of local authorities, here in the UK.
This month Jen’s been lining up speakers for the event (which takes place on 21st September), and Siôn has been planning how we can use workshops in the second half of the event to share best practice and build more connections between technologists and local authority officials.
If you’re from a local authority, or you’ve been involved in a climate-related technology project, and you’d like to share your work at the event, there’s still time to submit a proposal for inclusion in the programme.
We’re also excited to find we’ve been accepted to speak at the upcoming Code for All 2022 Summit (also happening in September), so we’re looking forward to working our sessions there into our wider plan for building connections between the climate and civic tech communities.
And finally, to complete the Summer events trifecta, we’ve been laying plans for an informal online get-together about energy efficiency and retrofit, since it’s proved such a popular subject during our prototyping weeks, and we’d really like to find the most impactful contribution we could make in the space, especially with fuel costs expected to continue rising well into 2023. If this interests you, share your availability for the week in which we’re planning to meet and join our climate updates newsletter to hear how things develop.
Once more unto the breach, dear friends
All good things must come to an end – and our series of six rapid prototyping weeks has certainly been a good thing! This month we’ve been preparing for the final week in the series, focussing on how improved collection and sharing of MP, constituency, and local climate action data, between environmental charities and organisations, could enhance public understanding of climate challenges and solutions, and build networks across local communities.
We’re really excited to be working on this with a number of big names in the space—including The Climate Coalition, Green Alliance, Friends of the Earth, the Wildlife Trusts, Hope for the Future, WWF, and Climate Outreach—and we can’t wait to see what recommendations come out of the week.
We’re also putting the final touches to our write-ups of the last two prototyping weeks (on fair transition and energy efficiency for private rental tenants) and will be posting them on our Climate Prototyping page shortly.
Friends, Romans, countrymen, lend us your ears!
Siôn has been sharing our procurement and energy efficiency prototypes with a whole range of organisations, getting their input on next steps we should take, and potential collaboration opportunities. So far we’re excited to have met with the Centre for Local Economic Strategies, UK Green Building Council, Architects Climate Action Network, Living Rent, Energy Local, Connected Places Catapult and Citizens UK.
Meanwhile, Myf has been renewing our efforts to promote CAPE to journalists, as one of the core audiences where we think up-to-date, accessible data on local authority climate action could really enable a new level of scrutiny and cross-pollination of climate actions around the UK. We’re looking to potentially speak at a few journalism conferences in the coming months, and we’re planning to prepare a set of online resources that might give journalists an idea of how they can use our data to find stories.
We also presented CAPE and the Scorecards at Friends of the Earth’s Environmental Data for Change event—which I was honoured to be asked to facilitate on FoE’s behalf—right at the end of June. It was an absolutely packed call, which left everyone buzzing with ideas for the future. We’re continuing to work with Friends of the Earth, and other attendees from the event, on how we take this great momentum and shape a community of practice around sharing and building on the rich environmental data available in the UK, to power more informed climate action.
Photo by Red Zeppelin on Unsplash.
It’s the end of June already and we’re now over halfway through the year; the solstice has passed and the days are starting to get shorter! Since the start of April the Climate team has been in a whirl of prototyping weeks, which has made time feel like it’s speeding past.
So what have we done this month?
Trialling GitHub Projects
Being an open source technical organisation, mySociety does a lot of its development work in GitHub, but on the Climate team we were using a mixture of Trello, spreadsheets and documents to track our priorities and progress. Having everything spread across so many places was causing the team confusion when it came to updating on progress and figuring out which tasks were the next most important.
So, at the start of June we switched to trialling GitHub’s Projects feature. This seems to answer a lot of our needs right now – everything is in one place, we can use status labels to track progress, and we can add custom labels which relate to project milestones. It has the bonus effect that we’re not doubling up work by having the same tickets in GitHub and Trello. We’re only two sprints in so far, so it’s still early days, but we’re hopeful this might be a simpler way of working.
There’s only been one prototyping week in June: A fair transition. This was a tough week as it was such a broad subject and it was difficult to work out what exactly would be most useful for us to work on. This is what we came up with.
We’ve also been planning for Week 5 – Energy efficiency for rental homes – which takes place from 5–11 July. There’s still time to apply if you’re interested in joining us on this one!
It’s been a busy month for Communications – we’ve put together a pitch for MG OMD, the global marketing agency that will be volunteering their time for us through the Weston Communicating Climate training programme that Myf, our Communications Manager, has been following. It gives us the opportunity to have a big agency input into our plans and maybe give us ideas for new ways of reaching people.
Myf has also been working on some case studies – one from Sustain and one from Green Finance Institute. They’ll really help to highlight why the climate action plan data we have is so important in driving positive change on reducing local climate emissions.
Alex has been working hard on our data ecosystem and we now have the local authority data up in a better format. You can find it here: https://mysociety.github.io/uk_local_authority_names_and_codes/
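To give a flavour of how that lookup table can be used: once you’ve downloaded the CSV, a few lines of standard-library Python will index authorities by code so you can join datasets keyed on different registers. The field names below (`gss-code`, `official-name`) are assumptions for illustration; check the column descriptions on the dataset page for the real ones.

```python
# Sketch: indexing the local authority lookup table by code.
# Column names are assumptions -- see the dataset's documentation.
import csv
from typing import Dict, Iterable, List


def load_lookup(path: str) -> List[dict]:
    """Read the downloaded lookup CSV into a list of row dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def index_by_code(rows: Iterable[dict], code_field: str = "gss-code",
                  name_field: str = "official-name") -> Dict[str, str]:
    """Build a code -> official name mapping from the lookup table's rows."""
    return {row[code_field]: row[name_field] for row in rows}
```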
Finally we’ve been working on events. We have our first Prototyping Show and Tell on Friday 1 July from 2pm – 3:30pm BST: do drop us a line to be added to the event if you want to come along and hear all about how prototyping works and what we’ve found.
We’ve also started looking at our September event, Innovations in Climate Change, which will be held on September 21 2022 on Zoom. We’re super excited about this and our aim is to bring together local councils, international actors and technology people to share their tech-based climate change projects and hopefully inspire some new work to reduce local climate emissions. If any of that sounds like you, sign up to present or keep your eyes peeled for an Eventbrite page to register your attendance.
Image: Natosha Benning