-
As part of our WhoFundsThem work we want to make better information available about money in politics.
Last year we released a report Beyond Transparency – looking at the UK Parliament’s register of financial interests, and wider arguments about how we fund politics.
Today we’re releasing a follow-up report: Leaky Pipes (read online or download as a PDF). This covers what we’ve learned (and what we think could be better) about the systems for reporting election donations. You can also re-watch the launch event on YouTube.
This report started because we were a bit confused about the different ways data could be declared and reported. And to be honest, we’re still a bit confused – but we have more diagrams to explain why.
In this report we explore the multiple routes for declarations, the different thresholds for disclosure, and uneven public access. Together these make cross-checking difficult and leave gaps where information can vanish depending on how a donation flows (direct to candidate vs via party), how large it is, and whether the candidate wins.
The result is that candidates and agents face complex reporting requirements, electoral administrators hold paper-heavy returns that are hard to inspect, and the public (and sometimes regulators) struggle to build a consistent picture of who is funding whom.
From this, we’ve made recommendations on making reporting easier to do correctly, faster to publish, and simpler to scrutinise:
- Move to a “report once” process that informs multiple systems
- Harmonise public disclosure at £1,000
- Create a comprehensive public database above that threshold
- Create a safe private database below the threshold for research and evaluation purposes
Building on this, we suggest three practical avenues for follow-up work that would strengthen the case for reform and help design better systems:
- User research and prototyping to map how a “report once” service would work for candidates, agents, administrators, Parliament, and the Electoral Commission.
- Sampling local authority returns to demonstrate the scale and type of inconsistencies between routes.
- Exploring a data-sharing agreement for controlled research access to the Electoral Commission’s small-donor/return data.
The report can be read online or downloaded as a PDF.
-
Understanding more about constituent communication
We’ve released a new report exploring insights from WriteToThem about the content of constituent communication – you can read the whole report online or a summary below.
WriteToThem.com is a long-running mySociety service that enables people across the UK to contact their elected representatives by entering their postcode and sending a message through the site.
This service provides a unique opportunity to understand the flow of communication between many constituents and many representatives. Our WriteToThem Insights report uses surveys to understand more about what people are writing about.
While previous work identified patterns in response rates and deprivation gradients, this experiment focuses on understanding what people are writing about, distinguishing between casework (individual problem-solving) and campaigning (policy-oriented advocacy).
A new survey and data-processing pipeline were developed to categorise and anonymise message summaries, applying machine learning and large language model techniques to cluster and label topics. Analysis of 5,400 messages from Q3 2025 found:
- Casework and campaigning form two distinct types of communication, with casework more common for councillors and campaigning dominant for MPs.
- The deprivation gradients of these two types differ sharply: campaigning is concentrated in less deprived areas, while casework is more evenly distributed, though likely still underrepresents the most deprived groups.
- First-time users are more likely to send casework messages and to receive responses.
- Top themes in casework include housing, local services, health, and anti-social behaviour; in campaigning, issues such as Gaza, climate policy, and digital ID predominate.
This data has limits: it covers only a portion of total correspondence, and we have little information about whether the sample is representative enough to generalise to messages sent more widely. That said, we think there are strong uses both for improving WriteToThem itself and for informing broader understanding of constituent communication.
We want to build on this work: refining the analysis process and exploring opportunities to collaborate. We see particular value in digging more into casework data as something that could inform more systematic approaches in this area, helping representatives across the country join up information and improve collective scrutiny of government services.
The full report can be read here.
—
Image: Christopher Burns
-
The UK has two Freedom of Information laws – one that covers Scottish public authorities and one that covers public authorities in the rest of the UK. While the Scottish law is similar to the UK-wide law in many respects, we think there are a number of practical ways the Scottish system improves on the wider system of FOI in the UK.
While being better than the UK law is a good start, our sights should be set a lot higher than that: Freedom of Information needs to keep pace with how the world has changed, the changing ways public services are delivered, and huge shifts in how information can be stored and shared.
Currently there is a Private Member’s Bill going through the Scottish Parliament with a combination of practical fix-ups to problems that have emerged, and bigger picture changes to encourage better proactive publication of information.
Last month, the Scottish Parliament’s Standards, Procedures and Public Appointments Committee invited views on the Freedom of Information Reform (Scotland) Bill, which aims to modernise and strengthen the existing law. Our submission welcomed the Bill as a timely and proportionate improvement to an already effective system.
In addition to our written evidence, we were delighted to be invited to give oral evidence to the committee. You can watch Alex’s evidence session here.
Overall we’re really supportive of this effort to update the FOI system in Scotland, and as Alex said to the committee, we’re especially pleased to see proposals for a new proactive publication duty.
This change would help public bodies make information available as a matter of course, reducing the need for requests and ensuring that information, once released, is accessible to everyone. In our research on fragmented public data, we’ve shown how better coordination and consistent publication practices can unlock huge public value. The Bill’s provisions around proactive publication are a welcome step towards achieving this.
This feels like a key moment for transparency enthusiasts to unite around the opportunity to make Scotland’s FOI system even better, and we’re delighted to play our part.
—
Image: Chris Flexen
-
Our WriteToThem website makes it easy for anyone to send an email to their elected representatives. That’s the core concept, and it works brilliantly for millions of users every year — but that said, we’re aware that even when a website is simple and built with usability at its core, not everyone has an equal ability to access it.
As part of our warm up for a new programme of work on WriteToThem, we’ve published some of our internal research from a few years ago on messages written on behalf of someone else — what we’re calling ‘proxy use’.
The reasons for this are easy to understand: the primary subject may not be confident at writing in English; may be elderly or have a condition that makes it easier to delegate the task of writing; or may generally use internet services through intermediaries.
The key findings are:
- A small group of users (5%) were writing on behalf of someone else.
- Proxy messages made up 6.8% of messages to local councillors, and 4.5% of messages to MPs. This would account for an estimated 55,000 messages to MPs through the service in 2019.
- The largest group was people who were writing on behalf of family (40%), but there were also people writing on behalf of local groups (40%), friends or people they knew (12%), and service providers writing to representatives on behalf of clients (8%). Messages on behalf of clients from carers would have accounted for an estimated 7,500 messages in 2019.
We’re about to embark on research and development work around WriteToThem, and these findings will contribute to our understanding of how to make it easier to get the right type of message to the right place. If you’re interested in digging deeper into everything we discovered around proxy usage, take a look at the full piece of research here.
-
I’ve written before about how we’re thinking about “low resource” use of Large Language Models (LLMs) — and where some of the benefits of LLMs can be captured without entering the “dependent on external API” vs “need new infrastructure to run internally” trade-offs.
One of the use cases we have for LLMs is categorisation: across parliamentary data in TheyWorkForYou, and FOI data in WhatDoTheyKnow we have a lot of unstructured text that it would be useful to assign structured labels to, for either public facing or internal processes.
This blog post is a write-up of an experiment (working title: RuleBox) that uses LLMs to create classification rules, which can then be run on traditional computing infrastructure without depending on external APIs. This allows large-scale text categorisation to run quickly and cheaply on traditional hardware, with no ongoing API costs.
Categorising Early Day Motions
We have a big dataset of parliamentary Early Day Motions (EDMs), which are formally ‘draft motions’ for parliamentary discussion but effectively work as an internal petition tool where MPs can signal their interest or support in different areas.
For our tools like the Local Intelligence Hub (LIH), we highlight a few EDMs as relevant indicators of whether an MP has a special interest in climate/environmental work. We want to keep these better up to date, and to have a pipeline that’s flexible enough for future versions of the LIH that might focus on different sectors. We want to be able to tag existing and new EDMs depending on whether they relate to climate/environmental matters, or to other domains of interest.
A very simple approach would be to plug into the OpenAI API and store some categories each day, but this creates a dependency and an ongoing cost. What we’ve experimented with instead is an approach where we use the OpenAI API to bootstrap a process: we’ve used the commercial LLM to add categories to a limited set of data, and then seen how we can use that to create rules to categorise the rest.
Machine learning and text classification
Regular expressions and text processing rules
The “traditional” way of classifying lots of text automatically is to use text matching or regular expressions.
Regular expressions are a special format for defining when a set of text matches a pattern (which might be “contains one of these words” or “find the thing that is structured like an email address”).
The advantage of this approach is that you can see the rules you’ve added, and by now the underlying technical implementations are really fast. The disadvantage is that you might need to add a lot of edge cases manually, and regular expression syntax is not always easy to understand.
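As a minimal illustration of this traditional approach (the keywords and function name here are our own, not mySociety's actual ruleset), a handful of patterns can flag a motion as environment-related:

```python
import re

# Illustrative keyword patterns for one category.
ENVIRONMENT_PATTERNS = [
    r"\bclimate\b",
    r"\bnet[- ]?zero\b",
    r"\bemissions?\b",
]

def is_environment(text: str) -> bool:
    """Return True if any environment keyword pattern matches the text."""
    return any(re.search(p, text, re.IGNORECASE) for p in ENVIRONMENT_PATTERNS)

print(is_environment("That this House notes the net zero target"))  # True
print(is_environment("That this House congratulates a football club"))  # False
```

The speed comes from the fact that pattern matching like this needs no model at all – the cost is that every edge case has to be written by hand.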
Machine learning
The use of “normal” machine learning provides a new tool. Here, models that have already been trained on a big dataset of the language are then fine-tuned to map input texts to provided categories.
The theory of what is happening here is that in order to accurately “predict the next word”, language models need to have developed internal structures that map to different flows and structures in the text. As such, if you cut off the final “predicting the next word” step, and replace it with a “what category” step, those internal structures can be usefully repurposed to this task.
As such, machine learning based text classifiers can be more flexible. They pick up patterns like “this flavour of word is in proximity to this flavour of word” that would be difficult to code for manually. The downside is that they are a black box, and it is hard to understand what the model has done to make a classification decision. They are also more resource-intensive and slower to categorise large datasets – but still fundamentally possible to run on traditional hardware.
LLMs
The next wave is LLMs, which take the same basic concept and massively increase the data and the size of the model. Here, rather than replacing the “next word” step, the LLM is trained on datasets that contain both instructions and the results of following those instructions. This makes zero-shot classification possible: without retraining, a model can be given a text and a list of labels, and it can assign the right label.
This remains a (now massive) black box, but errors in category assignment can be improved by adjusting the instructions. The new downsides over smaller machine learning models are that the much larger model size hugely increases the cost of self-hosting and creates dependencies on external companies providing models. If you use proprietary models (which are regularly updated and deprecated), this creates problems for reproducible processes.
Rulebox approach
The Rulebox approach combines aspects of both approaches. One of the things that LLMs are quite good at is writing code to solve stated problems. Here we’re doing a version of that: providing text and a category, and asking it to produce a set of regular expressions that should assign this category.
This has its own set of pros and cons: you are still bound by the underlying limitation of regular expressions – they match on literal text rather than the “vibes” of the text (which language models are better at). But you have massively reduced the labour time needed to create the huge set of rules, and once you have them they can be applied at speed on traditional hardware.
This is part of a focus on “low resource” use of LLMs – where we want to think about where we can get the most value out of new technology, in a way that avoids dependence or hugely increased capacity.
The process
We used an OpenAI-based process to assign labels to a set of 2,000 EDMs (1,000 each for the training and validation datasets).
We then created a basic structure for holding regular expression rules, using Pydantic for the underlying data structure of the collection of rules. Each rule holds a list of regex patterns combined with either AND (all must match) or OR (one must match) logic – with the option of NOT patterns that negate a positive match.
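A sketch of what such a rule structure might look like – using plain dataclasses rather than Pydantic so the example is self-contained, with class and field names that are our own invention rather than mySociety's actual code:

```python
import re
from dataclasses import dataclass, field

@dataclass
class RegexRule:
    """One rule: regex patterns combined with AND or OR, plus optional NOT patterns."""
    label: str
    mode: str  # "and" (all patterns must match) or "or" (any pattern must match)
    patterns: list[str]
    not_patterns: list[str] = field(default_factory=list)

    def matches(self, text: str) -> bool:
        hits = [re.search(p, text, re.IGNORECASE) is not None for p in self.patterns]
        positive = all(hits) if self.mode == "and" else any(hits)
        if not positive:
            return False
        # Any NOT pattern negates an otherwise positive match.
        return not any(re.search(p, text, re.IGNORECASE) for p in self.not_patterns)

@dataclass
class RuleSet:
    rules: list[RegexRule]

    def labels_for(self, text: str) -> set[str]:
        return {r.label for r in self.rules if r.matches(text)}

ruleset = RuleSet(rules=[
    RegexRule(label="environment", mode="or",
              patterns=[r"\bclimate\b", r"\bemissions?\b"],
              not_patterns=[r"\bpolitical climate\b"]),
])
print(ruleset.labels_for("calls for action on climate change"))  # {'environment'}
```

In the real pipeline, Pydantic's validation is what makes this structure safe to populate directly from LLM output.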
Once we have the holder for a set of rules, and a dataset with a set of labels, we can start to calculate mismatches between what the rules say, and the result. Running this in a loop with steps that query an LLM helps refine the result.
The steps are:
- Calculate mismatches between ground truth labels, and assigned labels: finding both missing labels and incorrect labels.
- AI: For each missing label, create a new regex rule that would assign the correct label.
- AI: For each incorrect label, adjust and replace the regex rules that triggered this label.
- Repeat until no missing or incorrect labels.
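The loop above can be sketched as follows – function names are illustrative, and `propose_rule`/`fix_rule` stand in for the LLM calls that the real process makes via PydanticAI:

```python
def mismatches(ground_truth, classify):
    """Compare ground-truth labels with rule-assigned labels.

    ground_truth: list of (text, set_of_labels) pairs.
    classify: function mapping text -> set of assigned labels.
    Returns (missing, incorrect) lists of (text, label) pairs.
    """
    missing, incorrect = [], []
    for text, true_labels in ground_truth:
        assigned = classify(text)
        for label in sorted(true_labels - assigned):
            missing.append((text, label))    # a rule should match here but doesn't
        for label in sorted(assigned - true_labels):
            incorrect.append((text, label))  # a rule matches here but shouldn't
    return missing, incorrect

def refine(ground_truth, classify, propose_rule, fix_rule, max_rounds=10):
    """Create rules for missing labels, adjust rules for incorrect ones,
    and repeat until the rules reproduce the ground truth (or we give up)."""
    for _ in range(max_rounds):
        missing, incorrect = mismatches(ground_truth, classify)
        if not missing and not incorrect:
            return True   # rules now reproduce the training labels
        for text, label in missing:
            propose_rule(text, label)   # AI: write a new rule matching this text
        for text, label in incorrect:
            fix_rule(text, label)       # AI: narrow the rule that misfired
    return False
```

The expensive LLM calls only happen inside this training loop; once it converges, the ruleset alone does all the classification.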
PydanticAI is used to interface with the OpenAI API. This includes not just using Pydantic to validate the returned data structure, but also extra validation that the generated rules actually match the text that was input. So, for instance, if a rule generated to assign a label to a piece of text fails to match that text, the failure is passed back to trigger a retry.
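The core of that extra check is simple to express – this is a sketch with a hypothetical function name, not PydanticAI's API; in the real process the raised error is what gets passed back to the model to trigger the retry:

```python
import re

def check_rule_matches_source(patterns: list[str], source_text: str) -> list[str]:
    """Reject a generated OR-rule unless at least one of its patterns
    matches the text it was generated from."""
    if not any(re.search(p, source_text, re.IGNORECASE) for p in patterns):
        raise ValueError("generated rule does not match its source text; retry")
    return patterns
```

This catches a common LLM failure mode: confidently producing a regex that looks plausible but doesn't actually match the example it was written for.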
The initial attempt at this got stuck in a loop, creating rules that were too general and then trying to narrow them down. At that point, we cut the categories down to just a few we were really interested in, and once that performed better, expanded out to eight categories where it felt like keyword matching should perform reasonably well (or at least successfully generate rules). This ended up with 1,500 regular expressions assigning eight categories.
Applying the rules
Once we have the rules, we know they work for the training dataset, but how useful are they in general?
Using the validation dataset, we can see the following differences:
- Correct labels: 230
- Missing labels: 73
- Incorrect labels: 41
- Items where no labels were assigned: 808 / 1000 total items
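The report gives raw counts rather than standard metrics, but treating each label assignment as a prediction, precision and recall follow directly from them (our own back-of-the-envelope calculation, not figures from the experiment itself):

```python
correct, missing, incorrect = 230, 73, 41

# Of the labels the rules assigned, how many were right?
precision = correct / (correct + incorrect)
# Of the true labels in the validation set, how many did the rules find?
recall = correct / (correct + missing)

print(f"precision={precision:.2f}, recall={recall:.2f}")  # precision=0.85, recall=0.76
```

That asymmetry (higher precision than recall) matches the narrative below: the rules rarely fire wrongly, but miss a long tail of true labels.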
Reviewing these, the incorrect labels generally felt fair enough – these tended to be examples that contained obvious environment-related keywords, but were part of longer lists where the labelling process did not judge the environment to be one of the focuses of the text. The missing labels were more of a problem: 33 of them were environmental ones. Expanding the training data should improve this, but there is always going to be a long tail that’s missed.
Something else we experimented with at this stage was moving the process that applies the rules from Python to Rust (using an LLM to translate a basic version of the Python mechanics). This cut the time taken to categorise 13,000 EDMs from 2 minutes to 4 seconds. The benefit isn’t just speed on this dataset, but that much more complicated rulesets would not cause a big slowdown.
What have we learned
In general this is an approach worth investigating further as a bridge between several useful features: with it, we are able to translate an initial, intensive use of an LLM into a process that can be run fast on traditional hardware, and that, importantly, is not a black box in terms of how it assigns labels.
It doesn’t completely carry over the benefits of LLMs: it is better for smaller, more precise categories, and it really needs a good theory of why a keyword approach would be a good way of categorising something. It might be a good transitional approach for a few years while options stabilise around more open models with lower resource requirements.
Next steps
The next steps on this are to expand the training data a bit and start seeing if we can practically make use of the categories assigned, or if the accuracy causes problems.
Depending how this goes, we can revisit the initial experiment code and tidy it up into a more general classifying tool. This could tackle other suitable classification problems we have, and we could make the tool more widely available. An advantage of this kind of approach (like our previous work around vector search) is that it is the kind of project where “a technically-minded volunteer helped us to create a tool” can help organisations without creating significant new dependencies or infrastructure requirements.
We also want to think about where hybrid approaches might be useful. For instance, in these datasets, most items are not labelled at all. A fast first pass that identifies potential items could then switch to an LLM approach to knock out false positives from the data. Similarly, once we have a smaller pool of environmentally-linked items, further subclassification using LLMs is much more viable.
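That hybrid idea can be sketched as a simple two-pass pipeline – the function names are illustrative, and `llm_check` stands in for a real model call:

```python
def hybrid_classify(texts, fast_rule, llm_check):
    """Two-pass hybrid classification.

    fast_rule: cheap regex-based predicate, run over everything.
    llm_check: expensive LLM-backed predicate, run only over the
    much smaller candidate pool to knock out false positives.
    """
    candidates = [t for t in texts if fast_rule(t)]
    return [t for t in candidates if llm_check(t)]
```

Because most items are never labelled at all, the expensive second pass only ever sees the small fraction of texts that the rules flag.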
Our general approach is to try to identify the things that LLMs can do uniquely well, and build them into overall processes that tame some of the things that worry us about AI in general. Here, focusing the use of LLMs on a bootstrapping role has resulted in new processes that are both fast and efficient. For more about our approach, read our AI framework.
Photo by Marc Sendra Martorell on Unsplash
-
To rebuild public trust in our political system we need better data, stronger checks, tighter rules and ultimately, systematic reform.
Over the last few months, 50 volunteers helped the TheyWorkForYou team go through the Register of Members’ Financial Interests (RMFI), line by line, for all 650 MPs. We were looking for specific bits of information, but also to more generally understand the state of the Register and how rules on transparency are working in practice.
- Read the report here
- Join us for the launch event at 1pm today
We have many ideas on how to improve that transparency, but the goal is not ‘just’ good documentation of office holders’ conflicts of interest: rather, the minimisation and elimination of those interests in the first place. To better align politicians’ behaviour with public expectations, there is no substitute for a stricter set of rules around MPs’ financial interests.
As such, we are making four categories of recommendations, stepping from incremental change to improve data collection, to systemic reform of the funding landscape.
- Better data collection to achieve more accurate interests information
- Stronger checks to make sure the interests information is reliable
- Tighter rules so there are fewer unacceptable interests in the first place
- Systematic reform to decrease the role of money in the political system.
As part of this project we have also added two new features to TheyWorkForYou:
- Election registers – adding more details and summaries to disclosures made after the last election.
- Highlighted interests – bringing together interests related to industries with low public support and to the governments of ‘not free’ countries, and offering MPs the opportunity to add context.
Over the next few months, we will release follow-on work from this project, including adding Registers of Interests for the devolved parliaments to TheyWorkForYou, releasing more information on APPGs, and a blog series on conflicts of interest declared in Parliament.
For now, do read the report. We’ll also be discussing our findings with Chris Cook of the Financial Times and Rose Whiffen from Transparency International today at 1pm: reserve your spot here.
-
In our new WhoFundsThem project we are making summaries of MPs’ registers of financial interests to add to TheyWorkForYou. We want to take these existing disclosures and add context to make them easier to understand. To do this, we are taking a hard look at how all the existing disclosure processes work (and when they don’t) to understand how we might best apply pressure for improvements.
One of our motivations here is that we think the rules about what MPs can and can’t do should be led by public expectations. To reflect that in our work, we’ve put together a literature review of the current picture of evidence around how MPs’ financial interests operate, and how these are perceived.
We’ve published this review online, but here are some quick thoughts I’ve taken away from this.
—
There’s been a big shift in the role of MPs from 40 years ago – in practice and in public perception, being an MP is now a full-time job. There is some nuance in public perception: while some professions are viewed more favourably in general (doctors/nurses), as the pay or involvement of outside work goes up, it is generally viewed less favourably.
There’s too much focus on the problem of second jobs being a distraction for the MP, and not enough on the problem of privileged access to Parliament for those who can pay for it. We should be asking questions about when MPs are selling their access rather than expertise. This encourages paying more attention to written questions – where MPs have a privileged ability to get answers to questions (and there’s indirect evidence this has been happening as part of some MPs’ second employment).
We need to care where donations come from, rather than being too focused on what they were spent on. A general throughline in the discourse is catching when people are benefiting privately from their position (e.g. receiving gifts) – but there’s also the situation that private donors are supporting the public work of politicians (for instance, funding researchers in their offices). With a “follow the money” hat on, this should be seen as an investment in relationships with politicians that might pay off later rather than being purely public spirited.
We need to be aware that transparency in this area has been a hedge against more substantial reform (e.g. disclose bad things rather than stop doing bad things). This compromise position has usefulness for both sides. For those who want stricter rules, it encourages politicians to have one eye on public opinion through disclosure requirements, and generates a regular series of news stories helpful in future reform.
But for those opposed to stricter rules, transparency can be framed as approval – where the electorate is argued to have endorsed MPs’ choices. Conversations become about whether the rules were followed rather than the underlying issues, and when the regime is only half-heartedly supported, non-disclosure can be common (meaning that scrutiny falls more on those correctly disclosing than on those who do not).
In general, we see increasing the transparency and getting the most out of the information that is available as the tool we have been given to improve the situation. But we shouldn’t lose sight that transparency is a means, not an end in itself.
—
From this work, we’ve created a set of questions that make sure we draw out important aspects of the register. Next week our volunteers will start to answer these questions.
These questions cover all sections of the register. We’re asking volunteers to help us understand which industries are showing up in MPs’ registers, and whether MPs are declaring an interest in debates and questions when they’re supposed to. We’ll compare the Register of Interests against Companies House with support from new data from Any One Thing, and we’ll get volunteers to give MPs’ registered interests an overall transparency score. The process will also include a right of reply, so MPs can respond to the summaries we write.
We do this work because we think it is possible to make politics better from the outside. Through combining the effort of volunteers with the lever of technology, we can make a real difference in how things work.
If you’d like to support this project – please donate today.
-
The first register of All Party Parliamentary Groups since the general election has just been published, and 519 of the 553 groups have vanished, leaving just 34.
What is an APPG?
All Party Parliamentary Groups (APPGs) are self-selecting groups of MPs and Lords with an interest in a particular policy area. Most groups are supported by a secretariat, which is usually a charity, membership body or consultancy organisation.
The logic behind APPGs is to create legitimate avenues for experts and interested parties from outside Parliament to discuss policy with MPs and Lords – but unfortunately they can also be vehicles for corruption.
Our WhoFundsThem project is going to be taking a closer look at APPGs, to see which MPs are members (this information is currently not published) and a closer look at the organisations providing secretariat support. We have also updated our public APPGs spreadsheet with the new register.
So why have so many groups disappeared?
A change in rules last year meant that we saw a huge drop-off from the 800+ groups registered in March to around 450 in April, and then a steady increase to 553 by the end of May. The 28th August edition has just 34 registered groups.
Since the general election, we think there are three factors that might be influencing the dramatic decline in registered groups:
- New officer rule – there’s a new rule that MPs are now only allowed to be an officer of a maximum of six groups.
- The reduced size of the opposition – the ‘all party’ nature of APPGs means that they must have at least one member of the official opposition as an officer. Before Parliament was dissolved for the election in May, the then Labour opposition had 206 MPs. Now, the Conservative opposition has 121 MPs. Conservative Lords are allowed to be officers of APPGs, but the APPG Chair must be an MP.
- Summer recess admin delay – in order to meet the deadline for this register, groups had to hold their new AGM to elect officers before summer recess began on 30 July. This gave them just a couple of weeks after the election, which was a hectic time, especially for the majority of MPs who were new to Parliament, and busy setting up their offices.
What next?
Given that we’ve just had one register, we can’t be sure which of these factors is having the biggest effect, but a second edition of the register should help us to understand the scale of the admin delay problem.
We expect a large number of groups will have used the summer to get established and recruit officers and members – but they will need to hold an AGM fairly soon after Parliament returns next week in order to make it into the new register, which should be published in about six weeks’ time.
We’ll be looking in detail at the work of these groups, and the people behind them, in our project WhoFundsThem. Please consider donating to help us do more of this work.
Photo by Erik Mclean on Unsplash
-
Tl;dr: Parliament has released new data, which we’ve made available in a simple format.
As part of the new release of the register of financial interests (which we blogged about yesterday) – Parliament has released CSVs of the new edition of the register. This isn’t just a better way of getting the data from each page individually, but contains much richer information than we’ve had previously.
Earlier this year, Parliament improved its data collection for MPs’ interests – meaning it collects much more structured data for different kinds of interests than the free text data that was released previously.
This is really good news – the work put in improving the data collection is so hard to do from the outside. Lots of effort has been made to clean up data in the past, but it was just fundamentally too broken. This is a big improvement on that – and means we can focus our efforts on where we can add the most value.
We know that Parliament is looking at creating data tools to sit on top of this – but in the meantime we’ve quickly made a single Excel file – and an analysis site to explore the data. We’ve also added our IDs from TheyWorkForYou and information on the MPs party. The great thing about Parliament making more data available is how that data can then be expanded by other datasets – for instance, the data now contains Companies House IDs, which could be joined to a range of datasets.
Please email if there are tweaks that would make the spreadsheet more useful to you!
Some example queries that are possible with this (give the site a minute to load):
Whenever Parliament ups its game, we need to think about what we’re going to do to build on top of that. As part of our WhoFundsThem project, we’re working to create simple summaries of declarations of interests. In general, the register is full of data but lacking in context. What do these organisations who have donated do? What’s the top-line figure on outside income? Is this affecting how MPs behave in parliament?
These are the questions we want to answer through WhoFundsThem. If you also want to know the answer, you can donate to support our work.
-
TICTeC, our Impacts of Civic Technology Conference, will be returning for its 7th edition on 12th and 13th June 2024, in London and online. We’re delighted to announce that the call for proposals and registration for TICTeC 2024 are now open.
TICTeC is all about sharing research, knowledge and experiences to examine and improve the impacts of civic technology, in order to strengthen democracy, public participation, transparency, and accountability across the world.
Call for Proposals: open until 22nd March 2024
Core themes
After twenty years of mySociety, and approaching ten years of TICTeC – we want to think about what is needed now to match the big challenges of the next twenty years.
As well as examining the impact that civic technology is having upon societies around the world, the big question we want to answer through TICTeC is:
What is needed to make civic tech on a global scale more successful and impactful, to tackle global problems around democracy and climate change?
Through TICTeC 2024 and 2025, and our new Communities of Practice – we are going to break down this question, and work through for ourselves, and with our partners, what is needed to deliver on the radical goals of the civic technology movement.
This breaks down into two sub questions that we want to explore. What is the role of civic tech in:
- safeguarding and advancing democracy/transparency where it is under threat?
- enabling the effective and democratic change needed to meet the challenge of climate change?
For this year’s TICTeC we encourage proposals that contribute to discussion around these two thematic questions, as well as to the overarching conference theme. Potential topic areas may include:
- Access to Information/Freedom of Information
- Monitoring parliaments/legislatures
- Climate change/climate action
- Tools for citizen participation
- AI and Democracy
- Civic tech as part of civil society
- Crowdsourcing and volunteers
- Impacts of big tech/tech giants
- Fact checking
- Technical infrastructure/cybersecurity
You can propose 20-minute presentations and ideas for longer workshops.
We encourage presentation submissions to focus on the specific impacts of technologies, rather than showcase new tools that are as yet untested. A tool doesn’t have to have mass usage to be worth talking about – we’re also interested in qualitative stories on the impacts of technology, their impacts on official processes, and how users have used platforms to campaign for change. We’re also interested in stories about obstacles and barriers to having impact.
Workshop proposals should be relevant to the conference themes. Technology does not have to be new, and we welcome retrospectives on long running projects.
The deadline for applications is the 22nd March 2024. Those selected for inclusion in the conference programme will be notified no later than 5th April 2024.
Presenters will be required to register for the conference by 19th April in order to confirm their slot (the registration fee will be waived for individuals presenting).
Submit your proposals via this application form by 22nd March 2024 at the latest.
Register now
Registration for TICTeC 2024 is now open and is essential in order to attend. TICTeC has sold out in previous years – so make sure you get tickets early. Early bird tickets provide a significant discount, so it’s well worth registering before early bird ticket sales end on 20th April 2024.
Attending TICTeC 2024 in-person will allow attendees access to all conference sessions, including main plenary sessions, presentation/Q&A sessions, workshops, networking sessions, lunches and drinks reception. Attending online will allow remote attendees access to all main plenary sessions and some breakout presentation/Q&A sessions.
The TICTeC 2024 Eventbrite page contains further information about the conference, as well as FAQs, but do let us know if you have any questions by emailing tictec@mysociety.org.
In the following months, we will be publishing full details of proceedings as they are announced over on the TICTeC website. If you’d like to hear of TICTeC 2024 updates first, please sign up for email updates.
And in the meantime, if you’d like to see what TICTeC is all about, you can browse all the resources from previous TICTeC events over on the TICTeC Knowledge Hub.
We look forward to welcoming you to TICTeC 2024!