As we move into the season of the falling leaves, we look back on the activities that fell in September.
Most importantly we welcomed Alexander to the team, doubling both our developer count and the number of people on the team named Alex.
Events, dear boy, events
We ran an event! About Climate Tech! It seemed to go quite well. There’s lots of detail in the blog post, plus links so you can rewatch speakers from Wiltshire to Copenhagen talking about how they used technology to help with everything from green roofs to community consultation.
The post also contains details of our follow-up event about the small grants (£5,000) we have available for local councils and partners for trialling ideas for tackling climate change.
Internally we spent a bit of time thinking about how we might use some futures scenarios to test out our plans and explore any unspoken assumptions we might have about the way the world works. Failing that we could always use said scenarios to help run a creative writing workshop on dystopian fiction.
The work goes on
We have come to the end of our prototyping weeks and we’re now starting to look at exploring some of them in more detail. The focus at the moment is on home energy, procurement and our most recent prototyping work with The Climate Coalition.
On the home energy front, Siôn has been continuing to speak to potential partners in the area while we work out the best way to turn this work into something concrete. If encouraging local communities to come together and improve the energy efficiency of their homes sounds interesting to you then get in touch.
Wasting no time, Alexander has been unknotting procurement and contracts data in order to turn our Contract Countdown prototype into something a little more functional. We’re still at an early stage with this, trying to work out if it’s practical to keep the data current. We’ll also be looking to show the more useful version to some potential users to see if it’s a service that has value.
Finally, we started work with The Climate Coalition on a beta of a tool to help them corral a range of data to more effectively help climate groups with campaigning. So far we’ve largely been talking about what data is both useful and available, and how to link it all up.
In non-prototyping work we’ve continued to chat to Climate Emergency UK about next year’s follow-up to the Council Climate Plan Scorecards. This is very much in the planning stage at the moment.
Previously in blog posts
One of the side effects of our work on Climate is that we’ve gathered a lot of data which we’d like more people to use. Alex wrote both about the data we have and about the process we use to gather and publish it. The first of these is of interest to anyone who would like some nice data, while the second is considerably more technical.
Speaking of people using our data, Myf published the latest in our series of case studies on how people are doing just that. This month it’s the turn of the Brighton Peace and Environment Centre, who’ve been using CAPE and the Council Climate Plan Scorecards to help with visualising councils’ progress towards their Net Zero targets.
As ever, if you’ve used any of our data we’d love to hear from you. It helps us both with prioritising future work and when talking to current and potential funders.
If you’d like this sort of thing in your inbox then you can sign up to our monthly climate newsletter by clicking the subscribe link at the top right of that page.
Image: Mott Rodeheaver
Suppose you’ve bought a Land Rover but you don’t know anything of its history. Might it have had an exciting past as a military vehicle? Has it had any major faults in its previous life? Freedom of Information is one way to find out.
And indeed, a number of people have been using WhatDoTheyKnow to discover more about the history of their ex-military Land Rovers by making a Freedom of Information request to the Ministry of Defence.
What information is available?
We’ve seen requests both for the military service history of a vehicle (for example which military units it was assigned to) and the maintenance record (details of inspections, servicing, faults and repairs).
Any information held is generally provided free of charge to anyone who asks — there is no requirement to prove that you are the owner or keeper of the vehicle. Here are some examples:
- Example of vehicle history released by the MOD
- Example of fault history released by the MOD
- Example of a request thread for the history of an ex-military Land Rover
How to make a request
Freedom of Information (FOI) requests can be made in public by using the WhatDoTheyKnow website to ask for information from the MOD. We would obviously prefer that people use our site, but FOI requests can also be made by writing a letter or sending an email.
Many of the requesters include both the chassis number and the registration number in the request, to help the MOD identify any relevant information held.
Will I get the information I ask for?
In some cases, information may be withheld using exemptions contained in the Freedom of Information Act 2000. For example, the MOD tends to redact the time taken to carry out repairs, to protect the commercial interests of the businesses they contract for this purpose.
In rare cases, information may be withheld in order to safeguard national security or to protect the UK’s defensive capabilities.
The MOD has publicly released a copy of the MERLIN database, in which details of military vehicles are recorded, and it may be worth checking that first if you are interested in making a request.
In the case of some previous FOI requests, where no information is held, the MOD has advised requesters that the Royal Logistic Corps Museum may be able to assist with their research (see for example this MOD letter of 25 September 2018).
Reasons that public authorities keep records about assets they no longer hold
The requests about ex-military Land Rovers highlight the fact that public authorities often hold records about assets they no longer own, and that in some cases this information will be of value to the new owners.
There are various reasons why records are kept after the asset is sold or otherwise disposed of. One is to help people who may have queries in the future. A great example of this practice is referenced in a response by Aberdeen Council to an FOI request made through WhatDoTheyKnow in June 2021.
“The electronic record held by Aberdeen City Council […] not only lists everything we have, but also everything that we once had as well. This is to ensure that we have a records trail for future research/provenance etc”.
Usage of FOI law in the UK
We think the requests about ex-military Land Rovers are interesting because they show the versatility of the UK’s FOI legislation.
Not every FOI request has to be about holding public authorities to account: requests can simply be made for information that people find useful for their businesses, their hobbies or their everyday lives. Making such requests in public helps other people who might be interested in making a similar request themselves. In addition, if there is an important public interest story hidden in the response, making the request in public maximises the chances that someone will find it.
Image: AlfvanBeem, CC0, via Wikimedia Commons
As it turns out, our projects have a lot in common. Both aim to make it easier for everyone to understand and assess the progress a council is making towards cutting carbon emissions, a field where the picture can be complicated and difficult for the average person to follow. That starts with data.
Rebecca explained, “Identifying the path to carbon neutrality is not straightforward, and the data that would enable organisations to know where they currently are on this path is very weak.”
To address this, ReForest Brighton is developing an interactive website to show in real time the progress that each local authority has made in relation to its individual carbon neutral targets.
Naturally, the project began by looking at the organisation’s hometown of Brighton, which has a target of Net Zero carbon emissions by 2030. That’s just the start, though; the model is replicable for any other local authority in the UK, allowing their own carbon neutral targets and the actions determined by their climate action plans to be slotted in.
“The aim,” says Rebecca, “is to provide real time quality data that will enable decision making around policy and practice.
“So for example, if it’s clear that maintaining the current level of action won’t bring a city to carbon neutrality by their set date, the council can refocus their efforts to reduce emissions and sequestrate more carbon.”
The ultimate target? “To make local authority councils more ambitious.”
Carbon neutral dates
So where do CAPE and the Scorecards site come in? As Rebecca explained, CAPE was useful mainly for a single datapoint amongst the many that it provides.
“The main way we’ve been using it is to retrieve the carbon neutral dates of all the individual local authorities in the UK.
“Without this data being easily accessible it would’ve taken us a long time and a lot of resources to go through more than 300 local authorities and dig out their target dates.”
And as for the Scorecards site, this has been more of a sanity-check tool: “We used it once we’d completed our calculations, to check our ratings of each local authority against the Scorecards rating.
“For example, if our calculations rated a local authority with high climate action but the Scorecards had it as low, then we’d analyse and reassess our ratings.”
As well as the interactive map, their project will produce predictive data to show how much progress each council will have made by its target zero emissions date.
For those who like the technical details, Rebecca is keen to oblige: “Our categorisation is based on a calculation of emission trends from 2016-2020. The trends allow us to predict where each local authority will be by the carbon neutral target date we downloaded from CAPE, using the ‘forecast’ formula (=FORECAST(x, known_ys, known_xs)).
“There is actually 15 years’ worth of emissions data available, but we chose this five-year period because climate action has only started becoming a consideration for local authorities in the last few years.
“Basically, we look at the predicted emissions on the authority’s carbon neutral date and categorise them accordingly — and if a local authority had no carbon neutral target or plans, it is automatically rated zero.”
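The same linear prediction can be sketched outside a spreadsheet. This is a minimal illustration of the approach Rebecca describes, using made-up emissions figures and a hypothetical 2030 target date rather than ReForest Brighton’s actual data:

```python
import numpy as np

def forecast(x: float, known_ys: list[float], known_xs: list[float]) -> float:
    """Linear prediction, equivalent to Excel's FORECAST(x, known_ys, known_xs)."""
    slope, intercept = np.polyfit(known_xs, known_ys, 1)
    return slope * x + intercept

# Illustrative emissions figures (kt CO2e) for 2016-2020 - not real data
years = [2016, 2017, 2018, 2019, 2020]
emissions = [500.0, 480.0, 455.0, 430.0, 410.0]

# Predicted emissions at a hypothetical 2030 carbon neutral target date;
# an authority with no target or plan at all would be rated zero automatically
predicted = forecast(2030, emissions, years)
```

The predicted figure can then be compared against zero (or a near-zero threshold) to place the authority in a rating band.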
A knock-on effect
ReForest Brighton wants to make it easier for the public to understand how their local authority is doing in achieving its carbon reduction targets — and they have another aim, too:
“We would like the public to push local governments to take faster, more effective action, and we’re planning to help them do this by giving them the means to write to their elected representatives, and to share the website with their friends and contacts.
“But even while hoping that councils will be making as much progress as possible, we’re also pushing for transparency. We’d even encourage an authority to push their carbon neutral target date further back if it gave a more honest picture of where they are at.”
Brighton Peace and Environment Centre are a registered charity and they welcome volunteers: get in touch if you would like to know more.
You can also make a donation to them, using this link.
Image: Aaron Burden
Last Wednesday a varied audience convened online for our Innovations In Climate Tech event.
The aim: to showcase some of the remarkable and effective projects being implemented in the UK and further afield, and to spark inspiration so that these, or similar projects, might be replicated in other UK regions.
mySociety has three £5,000 grants to give to innovators and local councils who work together and trial something they’ve seen, or been inspired by, during the event.
Missed the live version? Don’t worry: we have videos and notes, and you don’t have to have attended to be able to bid for a grant.
Rewatch or read up
Here’s where to find the various assets from the day:
- Watch a video of all the morning presentations, followed by the Q&A. This video features:
- Annie Pickering from Climate Emergency UK, on how they scored councils’ Climate Action Plans;
- Ariane Crampton from Wiltshire County Council on how they tackled outreach to diverse communities with their climate consultation;
- Claus Wilhelmsen from Copenhagen City Council on the practical ways in which they are tackling carbon cutting within construction industries;
- Ornaldo Gjergji from the European Data Journalism Network on how visualising temperature data from individual cities and towns helps people better understand the impacts of climate change;
- Kasper Spaan from Waternet on creating green roofs across the city to aid urban cooling and biodiversity.
- The afternoon breakout sessions weren’t recorded, but you can read notes of the presentations and subsequent discussions for:
- The Adaptation session (Padlet here) in which Josh Shimmin from Atamate talked about a data-driven approach to retrofitting housing;
- The Engagement session (Padlet here) in which Susan Rodaway from Pennard Community Council presented on a community consultation tool that helped them decide what to spend budget on; and Arnau Quinquilla from Climate 2025 talked about mapping climate movements across the world;
- The Spatial Planning session (Padlet here) in which Lora Botev from CitizenLab explained how their software enables councils to run consultations and grow an active group of residents who have a voice in decisions around climate;
- The Equity, Diversity and Inclusion session (Padlet here) in which Emma Geen from the Bristol Disability Equality Forum explained how vital it is to include disabled people in a green transition, and the ways in which the group has taken action to make this happen.
On 19 October, we’ll be running an informal session online, explaining what we’re looking for in a grant pitch, and giving you the chance to explore your ideas with potential partners. Then, if you want to go forward and bid for one of three £5,000 grants, we’ll give you everything you need to make your pitch.
You do not have to have attended the first event to join us at this stage. Please explore the resources above, add your thoughts to the Padlets, and sign up for this event via Eventbrite.
What sort of projects will we be funding?
To be eligible to bid for one of the grants, you must either be:
- a council that wants to trial an idea; or
- an organisation that wants to work with a council to trial your project.
Partnerships can be between two or more organisations, but every partnership must include at least one local council (and might only consist of councils). But don’t worry if you haven’t got a partner in mind yet – you may find one at this event.
- You might have seen an idea in the presentations that is directly applicable to your council area, and want to simply replicate it.
- Or, in a less straightforward but equally valid scenario, you might simply have seen an organisation you’d like to work with, or had a completely new idea sparked by something you saw.
- You might have no ideas at all, but a commitment to try something new… bring an open mind and see if anything at the event grabs you!
You can send Freedom of Information requests to more than 45,000 public authorities on WhatDoTheyKnow. For each of those authorities we need an email address to send those requests to, which means we often need to do some maintenance to keep everything up to date.
For some authorities in our database we don’t have a working email address. We might have had one in the past but it’s now out of date, or the authority might have merged and taken on new contact details – there are many reasons for missing email addresses, but they all leave us in the same predicament: we don’t know where to send your FOI requests for those bodies.
Can you help us find them?
If you have a little time to spare, a small amount of Googling could be a really big help for our users. Just five minutes here and there is all that’s needed to do a little bit of research to find the correct address.
The best starting point is almost always the authority’s website. Look for a dedicated contact email address for Freedom of Information requests.
Top tips for searching:
- Check the contact page.
- Check the footer on the homepage.
- Remember some public authorities such as schools and parish councils have very similar names, so make sure you are looking at the right one.
- If you can’t find a website for the authority itself, there are some other places that you can look: for example the NHS services site or the Get Information about Schools site.
Once you’ve found the right place, make a note of the contact email address. We prefer generic email addresses, for example those starting with foi@ or information@, as these tend not to change so often – so if there are multiple addresses given, these are the best ones to go for.
Let us know
If you find some of these missing email addresses please let us know.
We need both the new email address and the source (website address) where you found it, so we can verify the information.
You can send us this information by clicking the “Ask us to update FOI email” link on the public authority’s page. Just fill out the form with all the details that you’ve found.
Then our team of volunteers will use your input to update the database, and you’ll have ensured that people can make requests to the authority. That’s a really useful result.
Time poor but rich in other ways?
We know that your time is very precious and not everyone has the opportunity to help us out with tasks. If you are able to make a donation instead, that is also very helpful toward keeping our FOI service up and running.
Your contributions, however small, really help. Donate here.
Image: Marten Newhall
Data users in the UK often encounter fragmented public data, where public authorities are each spending money to publish data independently, but their outputs are difficult to find and join together. This means that a lot of effort is going into creating data which cannot be used to its full potential.
Our current thinking is that some level of central coordination is required to accompany a central mandate to publish. Public bodies need support and coordination to publish data to a lightweight common standard and in a common location.
We want to kick the tyres of this conclusion, identify existing attempts to fix the problem, and talk to people who produce and use the data to see what the obstacles are. If you’re interested, get in touch at firstname.lastname@example.org, or drop your views in this survey.
What is fragmented public data?
Fragmented public data is when many public authorities are required to publish the same data, but not to a common standard (structure) or in a single location – so data becomes fragmented across multiple locations and multiple formats.
In theory, this data should be used by companies, researchers and journalists to provide insight into spending, spot fraud, and find opportunities to sell to councils.
But in practice, to use the data, you’ll need to search all 333 council websites each month, then import each spreadsheet into a central database – and you’ll spend a lot of time pulling your hair out, because the spreadsheets don’t use a consistent format.
As a result, not much has actually been done with all this data. And the grand promises that spending data would unleash an ‘army of armchair auditors’ have largely failed to materialise.
Why does this matter?
This is a problem because most of the effort is already being spent, yet the job is done ineffectually. Councils do a lot of work to produce this data, and companies and analysts waste time fixing import scripts or crowdsourcing data rather than creating new products or insights – and for many organisations, the skills and resources required to create national-level datasets are simply beyond them.
This is not an isolated problem – there are many other examples of fragmented data.
For many datasets, while individual disclosures are useful, the combined data is much more than the sum of its parts because it allows real understanding of the picture across the whole country, and makes it easier to draw comparisons between different areas.
Across all these datasets the potential loss is huge – and just a bit of extra work could unlock huge amounts of the overall value of the data. We want to fix this for data that is already being published, and make sure that datasets in future are published in the best possible way.
So what can we do?
The big problem is one of coordination. We think the UK’s central data teams just need to help public authorities do two things:
- Use a lightweight common standard
- Report the location of the data in a central register.
By taking things that individual authorities are doing anyway, and agreeing a format and location, all the individual datasets become far more useful. But the details of the standard are less important than the fact that the government and legislators should be as interested in this side of the problem as they are in requiring the data to be published in the first place.
We’re keen to learn lessons from previous attempts to do this. Reviewing old publication guides, our initial conclusion is that these over-complicated the idea of what open data is (with a fixation on file formats and linked standards), rather than focusing on simple interventions that help both publishers and users (where, generally, the best approach is probably an Excel template with common headers). Common standards need to be a compromise between technical requirements and the needs of the people in local authorities who produce this data.
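As an illustration of how lightweight such a standard could be, here is a minimal sketch of a header check against a common template. The header names are hypothetical examples, not a proposed standard:

```python
import csv

# Hypothetical common headers for a spending-data template (illustrative only)
TEMPLATE_HEADERS = ["date", "supplier", "amount", "description"]

def missing_headers(path: str, template=TEMPLATE_HEADERS) -> list[str]:
    """Return any template headers absent from the file's first row.

    Comparison is case-insensitive, since publishers' capitalisation varies.
    """
    with open(path, newline="") as f:
        found = [h.strip().lower() for h in next(csv.reader(f))]
    return [h for h in template if h not in found]
```

A check this simple, run centrally against each authority's published file, would catch most of the inconsistencies that currently force users to hand-fix every spreadsheet.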
We’re encouraged by an approach taken by the Scottish Parliament, where publication of compliance with climate change duties was mandated in a particular format – and a consultation on this process found most organisations involved thought standard reporting was an improvement. We’re interested in any other examples of this kind of approach.
What are we doing now?
We’re trying to explore this problem a bit more, to understand the scale of the problem, and the viability of our approach.
We are interested in talking to:
- People who have run into this issue: what they were trying to do, and whether they were able to overcome it or had to give up.
- People who publish information in public bodies, to understand what restrictions they face.
- People or organisations with experience of trying to coordinate a single standard or location for public data releases.
The TheyWorkForYou alerts system will send you an email every time your chosen keyword is mentioned in Parliament. A recent survey revealed that this system is being used by a broad range of different organisations and individuals. We’ve been speaking to a few of them to find out more.
First of these is Ben Leapman, Editor of Inside Time, the national newspaper for prisoners and detainees, circulated to all of the UK’s 141 prisons.
A unique publication
As Ben explains, “Each issue includes news, features, advice, puzzles – and eight pages of readers’ letters, which provide a fascinating insight into what’s on the minds of men and women behind bars.
“We’re a not-for-profit publication and a wholly-owned subsidiary of the New Bridge Foundation charity, which was founded in 1956 to create links between the offender and the community. We’re funded by advertising revenue. As far as we’re aware, no other country has a national prison newspaper. We’re unique!”
As Editor, Ben commissions articles, decides which stories go on which pages, fact-checks, and plenty more. But he also writes news stories. We were, of course, interested to hear how TheyWorkForYou alerts can help with this.
Parliamentary mentions of prisons
“I use the alerts service to monitor for the keyword “prison” – it’s as simple as that,” says Ben.
“Prisons are a crucial public service, but sadly they don’t get as much attention from politicians or voters as schools and hospitals – it’s a case of “out of sight, out of mind”. So the volume of daily mentions is manageable, and I’m able to look at them all.”
These simple alerts have resulted in Inside Time stories such as this one, about an innovative scheme to reduce violence, being trialled at 18 prisons.
“I don’t think there has been any public announcement or press release about it,” says Ben: “I hadn’t heard of it until I saw the parliamentary question.”
And here’s another recent story, this time prompted by a House of Lords debate in which Lord Farmer, who wrote two Government reports on the importance of family visits to the rehabilitation of prisoners, says that Covid restrictions in prison visits halls are doing harm.
Stories can arise from all types of parliamentary activity: “I’ve found news stories in Commons and Lords debates, Select Committee hearings, written answers to Parliamentary questions in the Commons and Lords, Scottish Parliament proceedings, even the proceedings of Bill committees.”
Communication is key
Finally, we asked Ben what he thinks the impact of such stories is.
“I’m a news journalist – I think it’s always important that people are well-informed. For the general public in a democracy, exposure to news is essential so that people can cast their vote in a well-informed way.
“In England, prisoners are denied the vote – but there are other ways that reading news can be a direct benefit. Say we report on a new course or initiative that’s happening at a particular prison. If one of our readers reads that story and likes the sound of it, they could apply to transfer to that prison – or they could ask staff why it’s not happening at their prison.
“Prisons are rather secretive places, they’re not great at communication – so it’s often the case that both prisoners and prison staff are unaware of things going on around their prison or in other prisons, both the good and the bad.”
Thanks very much to Ben for giving us these insights into how he uses TheyWorkForYou alerts in his work.
It’s certainly one area that we’d never have imagined before he filled in our survey — but we are very glad to know that our services are helping with the admirable aims of Inside Time.
This is a more technical blog post, a companion to our recent blog post about local climate data. Read on if you’re interested in the tools and approaches we’re using in the Climate team to analyse and publish data.
How we’re handling common data analysis and data publishing tasks.
Generally we do all our data analysis in Python and Jupyter notebooks. While we have some analysis using R, we have more Python developers and projects, so this makes it easier for analysis code to be shared and understood between analysis and production projects.
Following the same basic ideas as (and stealing some folder structure from) the cookiecutter data science approach – that each small project should live in a separate repository – we have a standard repository template for working with data processing and analysis.
The template defines a folder structure, and standard config files for development in Docker and VS Code. A shared data_common library builds a base Docker image (for faster setup of new repos) and provides common tools and utilities, shared between projects, for dataset management. This includes helpers for managing dataset releases and for working with our charting theme. The use of Docker means that the development environment and the GitHub Actions environment can be kept in sync – and so processes can easily be shifted to a scheduled task as a GitHub Action.
The advantage of this common library approach is that it is easy to update the set of common tools from each new project, but because each project is pegged to a commit of the common library, new projects get the benefit of advances, while old projects do not need to be updated all the time to keep working.
This process can run end-to-end in GitHub – the repository is created in GitHub, Codespaces can be used for development, automated testing and building happens with GitHub Actions, and the data is published through GitHub Pages. The use of GitHub Actions especially means testing and validation of the data can live on GitHub’s infrastructure, rather than requiring additional work for each small project on our servers.
One of the goals of this data management process is to make it easy to take a dataset we’ve built for our purposes, and make it easily accessible for re-use by others.
The data_common library contains a dataset command line tool, which automates the creation of various config files, publishing, and validation of our data.
Rather than reinventing the wheel, we use the frictionless data standard as a way of describing the data. A repo will hold one or more data packages, which are a collection of data resources (generally a CSV table). The dataset tool detects changes to the data resources, and updates the config files. Changes between config files can then be used for automated version changes.
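For readers unfamiliar with the standard, a frictionless data package is described by a datapackage.json file listing its resources and their schemas. The descriptor below is a hand-written, simplified illustration (the resource and field names are invented), not one of our real packages:

```python
import json

# A minimal datapackage.json-style descriptor: one data package holding one
# resource (a CSV table) with a typed schema for each column
datapackage = {
    "name": "local-authorities",
    "resources": [{
        "name": "authorities",
        "path": "authorities.csv",
        "format": "csv",
        "schema": {"fields": [
            {"name": "code", "type": "string"},
            {"name": "name", "type": "string"},
            {"name": "start_date", "type": "date"},
        ]},
    }],
}

# The dataset tool (roughly speaking) rewrites this descriptor when resources
# change; diffing old and new descriptors then drives automated version bumps
print(json.dumps(datapackage, indent=2))
```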
Leaning on the frictionless standard for basic validation that the structure is right, we use pytest to run additional tests on the data itself. This means we define a set of rules that the dataset should pass (eg ‘all cells in this column contain a value’), and if it doesn’t, the dataset will not validate and will fail to build.
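A sketch of what such pytest-style data rules can look like, using a tiny inline table in place of a real resource (the column names and values are illustrative, not our actual tests):

```python
import pandas as pd

# Illustrative stand-in for a dataset resource (real tests would load the CSV)
df = pd.DataFrame({
    "code": ["E06000001", "E06000002"],
    "start_date": ["1996-04-01", "1996-04-01"],
    "end_date": [None, "2021-03-31"],
})

def test_no_missing_codes():
    # rule: all cells in this column contain a value
    assert df["code"].notna().all()

def test_end_after_start():
    start = pd.to_datetime(df["start_date"])
    end = pd.to_datetime(df["end_date"])
    # rule: end dates, where present, must not precede start dates
    assert (end.isna() | (end >= start)).all()
```

If either assertion fails, pytest fails, and the build of the dataset fails with it.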
This is especially important because we have datasets that are fed by automated processes, read external Google Sheets, or accept input from other organisations. The local authority codes dataset has a number of tests to check authorities haven’t been unexpectedly deleted, that the start date and end dates make sense, and that only certain kinds of authorities can be designated as the county council or combined authority overlapping with a different authority. This means that when someone submits a change to the source dataset, we can have a certain amount of faith that the dataset is being improved because the automated testing is checking that nothing is obviously broken.
The automated versioning approach means the defined structure of a resource is also a form of automated testing. Generally following the semver rules for frictionless data (with the exception that adding a new column after the last column is not a major change), the dataset tool will try to determine whether a change from the previous version is a MAJOR (backward compatibility breaking), MINOR (new resource, row or column), or PATCH (correcting errors) change. Generally, we want to avoid major changes, and the automated action will throw an error if one happens. If a major change is required, it can be made manually. The fact that external users of the file can peg their usage to a particular major version means that changes can be made knowing nothing is immediately going to break (even if data may become more stale in the long run).
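The column-level part of that version-bump logic can be sketched as a comparison of column lists. This is a simplified illustration of the rule described above, not the dataset tool's actual implementation:

```python
def classify_change(old_cols: list[str], new_cols: list[str]) -> str:
    """Classify a schema change as MAJOR, MINOR or PATCH.

    Removing, renaming or reordering columns breaks compatibility (MAJOR);
    appending new columns after the last existing column is MINOR; an
    unchanged structure is PATCH (data corrections only).
    """
    if new_cols == old_cols:
        return "PATCH"
    if new_cols[: len(old_cols)] == old_cols:
        return "MINOR"  # columns only appended at the end
    return "MAJOR"
```

For example, adding a new final column yields MINOR, while swapping two columns yields MAJOR, which in our pipeline would stop the automated build.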
Data publishing and accessibility
The frictionless standard allows an optional description for each data column. We make this required, so that each column needs to have been given a human readable description for the dataset to validate successfully. Internally, this is useful as enforcing documentation (and making sure you really understand what units a column is in), and means that it is much easier for external users to understand what is going on.
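Enforcing descriptions is simple to sketch: reject any schema where a field's description is missing or blank. A minimal illustration (field names invented, and not our actual validation code):

```python
def missing_descriptions(schema: dict) -> list[str]:
    """Return names of fields lacking a human-readable description."""
    return [
        field["name"]
        for field in schema.get("fields", [])
        if not field.get("description", "").strip()
    ]

# An illustrative schema: one documented column, one undocumented
schema = {
    "fields": [
        {"name": "emissions", "description": "Total emissions in kt CO2e"},
        {"name": "year", "description": ""},
    ]
}
# validation would fail here, because 'year' has no description
```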
Previously, we were uploading the CSVs to GitHub repositories and leaving it at that – but GitHub isn’t friendly to non-developers, and clicking a CSV file opens it up in the browser rather than downloading it.
To help make data more accessible, we now publish a small GitHub Pages site for each repo, which allows small static sites to be built from the contents of a repository (the EveryPolitician project also used this approach). This means we can have fuller documentation of the data, better analytics on access, sign-posting to surveys, and better sign-posted links to downloading multiple versions of the data.
The automated deployment also means we can very easily create Excel files that package all the resources of a data package into a single file, including the metadata about the dataset, as well as information about how users can tell us how they’re using it.
Publishing in an Excel format acknowledges the practical reality that lots of people work in Excel. CSVs don’t always load nicely in Excel, and since Excel files can contain multiple sheets, we can add a cover page that makes our data easier to use and understand by packaging all the explanations inside the file. We still produce both CSVs and XLSX files – and can now do so with very little work.
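A minimal sketch of that packaging step, using pandas (the resource names and contents here are hypothetical, and this assumes an Excel writer engine such as openpyxl is available):

```python
import os
import tempfile

import pandas as pd

# A hypothetical resource from one data package.
codes = pd.DataFrame({
    "code": ["ABC", "CAM"],
    "name": ["Argyll and Bute", "Cambridge"],
})

# Metadata that would normally come from the datapackage description.
cover = pd.DataFrame({
    "field": ["package", "columns", "feedback"],
    "value": [
        "local-authority-codes (hypothetical)",
        "Every column is explained on this cover sheet.",
        "Tell us how you use the data via the survey on the download page.",
    ],
})

out = os.path.join(tempfile.mkdtemp(), "package.xlsx")
with pd.ExcelWriter(out) as writer:
    # First sheet: the cover page with metadata and contact details.
    cover.to_excel(writer, sheet_name="cover", index=False)
    # Then one sheet per resource in the package.
    codes.to_excel(writer, sheet_name="codes", index=False)
```

Because this runs inside the same automated deployment that publishes the CSVs, the Excel file can never drift out of sync with them.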
For developers who are interested in making automated use of the data, we also provide a small package that can be used in Python or as a CLI tool to fetch the data, and instructions on the download page on how to use it.
At mySociety Towers, we’re fans of Datasette, a tool for exploring datasets. Simon Willison recently released Datasette Lite, a version that runs entirely in the browser. That means that just by publishing our data as a SQLite file, we can add a link so that people can explore a dataset without leaving the browser. You can even create shareable links for queries: for example, all current local authorities in Scotland, or local authorities in the most deprived quintile. This lets us do some very rapid prototyping of what a data service might look like, just by packaging up some of the data using our new approach.
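Constructing those shareable links is just URL building: Datasette Lite accepts the address of a database file as a query parameter. A small helper might look like this (the SQLite URL below is a made-up example, not one of our real files):

```python
from urllib.parse import urlencode

def datasette_lite_link(sqlite_url: str) -> str:
    """Build a link that opens the given SQLite file in Datasette Lite,
    which runs entirely in the visitor's browser."""
    return "https://lite.datasette.io/?" + urlencode({"url": sqlite_url})

link = datasette_lite_link(
    "https://example.org/data/uk_local_authorities.sqlite"
)
```

Dropping a link like this onto each dataset’s download page gives readers an instant, no-install way to poke at the data.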
A feature already in use in a few of our repos is the ability to automatically deploy analysis of a dataset when it is updated.
Analysis of the dataset can be designed in a Jupyter notebook (including tables and charts) – and this can be re-run and published on the same GitHub Pages deploy as the data itself. For instance, the UK Composite Rural Urban Classification produces this analysis. For the moment, this is just replacing previous automatic README creation – but in principle makes it easy for us to create simple, self-updating public charts and analysis of whatever we like.
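The re-run-and-publish step amounts to executing the notebook against the fresh data and rendering it for the Pages deploy. A sketch of that CI step, wrapping `jupyter nbconvert` (the notebook path and output directory are hypothetical):

```python
import subprocess

def render_notebook(notebook: str, output_dir: str, run: bool = False) -> list[str]:
    """Build (and optionally run) the command that re-executes an
    analysis notebook and renders it to HTML for the Pages deploy."""
    cmd = [
        "jupyter", "nbconvert",
        "--to", "html",
        "--execute",            # re-run every cell against the updated data
        "--output-dir", output_dir,
        notebook,
    ]
    if run:
        subprocess.run(cmd, check=True)
    return cmd

# Dry run: just show the command a CI job would execute.
cmd = render_notebook("analysis/readme.ipynb", "_site", run=False)
```

Because the notebook is executed rather than rendered from stale outputs, the published tables and charts always reflect the latest version of the data.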
Bringing it all back together and keeping people up to date with changes
The one downside of all these datasets living in different repositories is making them easy to discover. To help out with this, we add all data packages to our data.mysociety.org catalogue (itself a Jekyll site that updates via GitHub Actions) and have started a lightweight data announcement email list. If you have got this far, and want to see more of our data in future – sign up!
One of the things we want to do as part of our Climate programme is help build an ecosystem of data around local authorities and climate data.
We have a goal of reducing the carbon emissions that are within the control of local authorities, and we want to help people build tools and services that further that ambition.
We want to do more to actively encourage people to use our data, and to understand if there are any data gaps we can help fill to make everyone’s work easier.
So, have we already built something you think might be useful? We can help you use it.
Also, if there’s a dataset that would help you, but you don’t have the data skills required to take it further, we might be able to help build it! Does MapIt almost meet your needs but not quite? Let’s talk about it!
You can email us, or we are experimenting with running some drop-in hours where you can talk through a data problem with one of the team.
You can also sign up to our Climate newsletter to find out more about any future work we do to help grow this ecosystem.
Making our existing data more accessible
Through our previous expertise in local authority data, and in building the Climate Action Plan Explorer, we have gathered a lot of data that can overcome common challenges in new projects.
- A swiss-army knife/skeleton key/useful spreadsheet that lists all current local authorities, and helps transform data between different lookups.
- MapIt: an API that can take postcodes and tell you which local authority they’re in (and much more!). Free for low-traffic charitable projects.
- Datasets of which authorities have published climate action plans.
- Datasets of which authorities have published net zero dates, and their scopes.
- A massive 1GB zip of all the climate plans we know about.
- A measure of local deprivation across the whole UK.
- A simplified version of the BEIS local authority emissions data.
- Measures of similarity between all local authorities (emissions, deprivation, distance, rural/urban and then all of those things together).
All of this data (plus more) can be found on our data portal.
We’ve also been working to make our data more accessible and explorable (example):
- Datasets now have good descriptions of what is in each column.
- Datasets can be downloaded as Excel files.
- Datasets can be previewed online using Datasette Lite.
- Basic instructions show how to automatically download updated versions of the data.
If you think you can build something new out of this data, we can help you out!
Building more data
There are a lot of datasets we think we can make more of — for example, as part of our prototyping research we did some basic analysis of how we might use Energy Performance Certificate data (for home energy in general, and for renting in particular).
But before we just start making data, we want to make sure we’re making data that is useful to people: data that can help people tell stories, and build websites and tools. If there’s a dataset you need, where you think the raw elements already exist, get in touch. We might be able to help you out.
If you are using our data, please tell us you’re using our data
We really believe in the benefit of making our work open so that others can find and build on it. The big drawback is that the easier we make our data to access, the less we know about who is using it.
This is a problem, because ultimately our climate work is funded by organisations who would like to know what is happening because of our work. The more we know about what is useful about the data, and what you’re using it for, the better we can make the case to continue producing it.
Each download page has a survey that you can fill out to tell us about how you use the data. We’re also always happy to receive emails!
Stay updated about everything
Our work growing the ecosystem also includes events and campaigning activity. If you want to stay up to date with everything we do around climate, you can sign up to our newsletter.
Image: Emma Gossett
[UPDATE: Since this post was published, the 19 September has been declared an official bank holiday – our assumption in the final section of this post, that ‘no bank holiday will be formally declared’, and the conclusions we came to as a result, were incorrect. We’ve updated the section accordingly.]
On 8 September 2022, Buckingham Palace announced the death of Her Majesty Queen Elizabeth II.
mySociety runs the WhatDoTheyKnow website, which lists many authorities related to the monarchy, the Royal household and associated offices.
As a non-partisan UK registered charity, we recognise that some of our users will view the monarchy as being a political institution, while others will not. We ask all our users to be respectful in their communications and to continue to follow our House Rules.
The constitution of the United Kingdom
When a monarch dies in the UK, they are immediately succeeded by their heir, even before the coronation has taken place. The Demise of the Crown Act 1901 provides that the holding of any office under the Crown is not to be affected in any way by the death of the reigning monarch.
What this means in practice is that Government ministers, civil servants and military personnel continue to hold office and that public authorities will continue to exist without interruption. Where this becomes relevant to WhatDoTheyKnow is that FOI requests do not need to be resubmitted.
The names of public authorities
Where appropriate, our volunteers will update the names of public authorities and the notes on the site to reflect the fact that King Charles III is now the Sovereign of the United Kingdom. For example, we will rename the Queen’s Printer for Scotland to the King’s Printer for Scotland when the authority publicly updates its name.
Not all bodies with “Queen” in the name need to be renamed. For example, The Queen’s College, Oxford was named in honour of Queen Philippa of Hainault and does not require a change. A number of schools and other public bodies were named in honour of Queen Elizabeth II, such as the Queen Elizabeth II Conference Centre and we’d expect that they will continue to use their current names.
We’ve also updated the site to take account of the fact that Prince William is now the Duke of Cornwall. Under a royal charter from 1337, the position of Duke of Cornwall is held by the eldest son of the reigning monarch, provided he is heir to the throne.
We have never listed the monarch as a public authority on WhatDoTheyKnow, but we continue to list the Royal Household.
We expect the Accession Council to convene shortly at St James’s Palace. The Accession Council is a body that meets twice following the death of a monarch. The purpose of the first meeting is to formally announce the death of the monarch and proclaim the succession of the new sovereign. The second meeting will be the first Privy Council meeting of the new monarch.
The day of the state funeral will be a day of national mourning in the UK, and has now been formally declared as a bank holiday.
This means that WhatDoTheyKnow will automatically treat 19 September as a non-working day for the purposes of calculating the time limits for complying with Freedom of Information requests. Even before we understood that the day would officially be a bank holiday, we had made this adjustment. This would have meant that our position differed from the position set out in FOI law; however, we believed it to be the most reasonable approach in the circumstances.
Bank holiday matters aside, we encourage people making FOI requests to recognise that some employees of public authorities will have a higher than normal workload at present and to be patient and courteous when dealing with public officials.