1. Our transparency rules need to adapt to the rise of AI

    The government is making a significant investment into AI in public services, and systems are changing apace.

    AI is increasingly being deployed in every department of government, both national and local, and often through systems procured from external contractors.

    In a recent article for Public Technology, mySociety’s Chief Executive Louise Crow flags that we urgently need to update our transparency and accountability mechanisms to keep pace with the automation of state decision-making.

    This rapid adoption needs scrutiny: not only because significant amounts of money are being spent, but also because we’re looking at a new generation of digital systems in which the rules of operation are, by their very nature, opaque.

    To see Louise’s thoughts on what needs to change, and why, as this new technological era unfolds, read the full piece here.

    If you find it of interest, you may also wish to watch this recent event at the Institute for Government, The Freedom of Information Act at 25, where Louise was one of six speakers reflecting on the future of transparency in the UK.

     

    Image: Alex Socra

  2. AI and government: keeping the human in democracy

    mySociety was founded on one seismic technological change: the arrival of the internet, bringing radical new possibilities to the ways in which we engage with democracy.

    Now we’re seeing a second upheaval, just as potentially explosive: the wide adoption of generative AI and machine learning tools — particular kinds of artificial intelligence — not least by the UK government, who have made a commitment to see AI “mainlined into the veins of the nation”. 

    From the visible and novel, like ‘AI bot’ MPs; to the hidden and less-interrogated, like the algorithms that drive decision-making around benefits; to the new capabilities around working with large text datasets that we ourselves are experimenting with at mySociety: artificial intelligence is changing the way democracy works. 

    We’ve been thinking about AI for some time, as have our colleagues around the world — TICTeC 2025 had a strong strand of pro-democracy organisations showcasing how they are using new technologies to hold authorities to account and support public engagement; alongside developers showing the tools that aim to make the government more responsive. 

    AI is coming to democracy, whether we like it or not. In many places, it’s already here.

    But there are implementations in which it can be highly beneficial to us all, and ways in which it can present a clear and present danger to democracy. 

    It benefits everyone if there is a high level of understanding of both the challenges and the opportunities of AI in government. Democratic decision makers need to understand digital tech in order to legislate around it effectively, and to develop and procure it well.

    This is not just so that they can deliver services more efficiently, but also to ensure that they retain the legitimacy of democratic government by using tech and AI in a way that ensures transparency and accountability, preserves public trust and allows the public to understand and participate in the decisions that affect their lives. 

    Reflections for our time

    Over the next few months, we’ll be sharing our own thoughts and experience — alongside invited guest writers who are thinking about how AI interacts with democratic processes and institutions, and how to make that better — in a series of short pieces.

    These will examine the different ways that AI is affecting the things we care about here at mySociety: 

    • Transparent, informed, responsive democratic institutions 
    • Politicians and public servants who work for the public interest 
    • Democratic equality for citizens: equal access to information, representation and voice
    • A flourishing civil society ecosystem
    • The effective and principled use of digital technologies
    • Action from politicians to match the evidence of the climate crisis and the level of public concern 
    • Better communication between politicians and the public, creating space for climate action. 

    Stay informed

    If you’d like to get updates in your inbox, make sure you’ve checked ‘artificial intelligence’ as an interest on our newsletter sign-up form (if you already receive our newsletters, don’t worry – so long as you use the same email address, this will just update your preferences. Just make sure you’ve ticked everything you’re interested in).

    By also completing the ‘how do you identify yourself’ section, you’ll help us send you the most relevant material: that means guidance if you work in government or build tech; data and our analysis if you’re a researcher; tools for holding authorities to account if you are an individual or work in civil society, and so on.

    Image: Adi Goldstein

  3. How FOI feeds into public conversation

    Often, responses published on our Freedom of Information site WhatDoTheyKnow result in newspaper stories, or feed into campaigns or research.

    When this happens with one of your own requests, you can add a link to the page. These then appear in the side column, like this:

    FOI in Action
    • https://news.stv.tv/politics/swinney-shared-concern-over-golf-course-vandalism-in-meeting-with-trumps-son
    • https://www.thescottishsun.co.uk/news/14626492/john-swinney-secret-meeting-eric-trump-notes/

    It’s a great way for other users of the site to see the direct results that come from the simple act of making an FOI request — and now we’ve also added an ‘FOI in Action’ page, where you can see all of them in one place.

    The banner from our 'FOI In Action' page, which explains the types of request that include citations: Journalism, Research, Campaigning and advocacy; and 'Other'

    Here are five stories that have caught our eye from that page:

    We’re not far off listing 3,000 citations on WhatDoTheyKnow — and these are just the ones users have added. If your request resulted in a piece of journalism, informed a campaign or fed into research, do add it in. As well as helping to show others what FOI can do, it provides a significant link back to the external site, helping bring it more readers.

    Image: Peter Lawrence

  4. How access to information can help us understand AI decision making

    mySociety podcast

    AI and automated decision-making technologies are increasingly being used in government, and due to their opaque nature, it’s vital that we bring more transparency to their workings. In this event, three researchers and civil society actors talk about how they have used Freedom of Information to do just that.

    You’ll hear from Morgan Currie from the University of Edinburgh; Gabriel Geiger of Lighthouse Reports, and Jake Hurfurt from Big Brother Watch. Learn what concerns them about this new age of automated decision-making; the practical tips and techniques they’ve used to bring hidden algorithms to light; and what needs to change in our laws as a matter of urgency.

    More information


    Transcript

    Louise Crow 0:03
    Hello, everyone, welcome. I’m Louise Crow, Chief Executive of mySociety.

    Louise Crow 0:08
    Thank you for joining us for this one hour session on how Access to Information can help us understand AI decision making in government. (more…)

  5. How access to information can help us understand AI decision making

    If you were one of the 100+ people who joined us for today’s webinar, you’ll already know it was hugely informative and timely.

    We packed three fascinating speakers into the course of one hour-long session on using FOI to understand AI-based decision making by public authorities. Each brought so many insights that, even if you were there, you may wish to watch it all over again.

    Fortunately, you can! We’ve uploaded the video to YouTube, and you can also access Morgan’s slides on Google Slides here, and Jake’s as a PDF here (Jake wasn’t actually able to display his slides on the day, so this gives you the chance to view them alongside his presentation, should you wish).

    Morgan Currie of the University of Edinburgh kicked things off with a look at her research ‘Algorithmic Accountability in the UK’, and especially how opaque the Department for Work and Pensions (DWP)’s use of automation for fraud detection has been, over the years.

    Morgan explained the techniques used to gain more scrutiny of these decision-making and risk assessment processes, with much of the research based on analysing FOI requests made by others on WhatDoTheyKnow, which of course are public for everyone to see.

    Secondly, in a pre-recorded session, Gabriel Geiger from Lighthouse Reports gave an overview of their Suspicion Machines investigation, which delves into the use of AI across different European welfare systems. Shockingly, but sadly not surprisingly, the investigation found code that was predicting which recipients of benefits are most likely to be committing fraud, with an inbuilt bias against minoritised people, women and parents — multiplied for anyone who falls into more than one of those categories.

    Gabriel also outlined a useful three-tiered approach to this type of investigation, which others will be able to learn from when instigating similar research projects.

    Our third speaker was Jake Hurfurt of Big Brother Watch, who spoke of the decreasing transparency of our public bodies when it comes to AI-based systems, and the root causes of it: a lack of technical expertise among smaller authorities and the contracting of technology from private suppliers. Jake was in equal parts eloquent and fear-inducing about what this means for individuals who want to understand the decisions that have been made about them, and hold authorities accountable — but he also has concrete suggestions as to how the law must be reformed to reflect the times we live in.

    The session rounded off with a brief opportunity to ask questions, which you can also watch in the video.

    Presented in collaboration with our fellow transparency organisations Access Info Europe and FragDenStaat, this session was an output of the ATI Community of Practice.

    Image: Michael Dziedzic

  6. An international community working to ensure AI is deployed for the good of all

    TICTeC, our Impacts of Civic Technology conference, has been running since 2015. Over the years, we’ve seen shifts within both tech and democracy that have been reflected as priority topics: from the foundational (and evergreen) question of ‘how can you assess the value of civic technology if you don’t measure its impacts?’, to the rise of authoritarian ‘strong man’ leaders across the world, to a surge of enthusiasm for what blockchain can do around civic tech.

    As each of these topics has risen to the top of the civic tech community’s consciousness, TICTeC has provided a natural place to air questions, concerns and solutions.

    This year, of course, the foundation-shaking issue is AI. Compared to 2024, when the technology was just beginning to be applied in our field, there’s been a maturing of the discussion, and much more concrete engagement with both the opportunities and the challenges that AI brings around government, truth, trust and delivery.

    Our job is to make sure we steer towards the good — or, to phrase it in alignment with mySociety’s own aims, to examine how to engage critically and transparently with AI to create a fair and safe society.

    AI across TICTeC 2025

    The theme of AI was woven through the conference: where it wasn’t the primary topic itself, it coloured our thinking and had relevance everywhere. 

    Sessions dealing primarily with AI could be divided into three broad angles:

    • Since AI is already making inroads into governance systems, how can we ensure it is used well? 
    • How have AI’s capabilities been harnessed to make civic tech tools, improve functionality or increase efficiency, and how’s that going? 
    • Can tools counter the problems that AI presents around truth and trust?

    Let’s look at each of these in turn.

    AI and democratic governance

    Both of our keynote speakers were keen to point out the need for oversight and citizen participation as AI is rapidly adopted across government systems. 

    Marietje Schaake, whose presentation you can rewatch here, warned of the dangers of private tech firms holding more power than our constitutional democracies, thanks to the limitless profits to be made from this new technology; while Fernanda Campagnucci (presentation here) advocated for citizens to be allowed into the decision-making processes not just around governance itself, but in the making of the tools that facilitate it.

    We also heard from the people at the frontline of governance. An instructive session from Westminster Foundation For Democracy and the Hellenic Parliament (not recorded) quizzed participants on how comfortable they would be in easing the administrative burden of parliaments by allowing AI to help categorise, filter and even answer letters from citizens. Would our opinion change if we knew, for example, that there was a backlog of 40,000 messages to representatives?

    In a session deeply rooted in the realities of running a local authority during a period of tech acceleration, Manchester City Council explained that in a city where 450,000 people don’t even use the internet, it is crucial to ensure AI is being used ethically and to communicate how it affects citizens’ lives: “Whether or not you choose to interact with AI there’s no way of opting out – AI based decision making is happening around you.”

    Three speakers from the Civic Tech Field Guide laid out the case for audits on how AI is being used in your own community, showing how anyone can do it, and Felix Sieker from Bertelsmann Stiftung made a strong argument for public AI, with proper accountability and democratic oversight, rather than the power being concentrated in a handful of private firms — something that is already being developed in several different forms, including by Mozilla.

    MIT GOV/LAB ran a workshop (not recorded) in which we could chat with a simulation of a person from the future about the effects of a climate policy, then decide whether or not we would implement that policy once we had a human account of its results. This is part of ongoing research into helping to break deadlocks in policy decision-making.

    How AI is already being used in civic tech

    Both Code for Pakistan and Tainan Sprout showed how they’ve deployed AI to allow citizens to query dense policy documentation and get answers that are easy to understand.

    Demos talked about the work they’ve been doing around a new AI-powered digital deliberation process called Waves, hoping to ‘do democracy differently’ in our current crisis of mistrust.

    Dealing with AI and misinformation

    Camino Rojo from Google Spain showcased new tools, some of which are shortly to be rolled out, to help counteract misinformation. In particular, these allow users to check whether or not media displayed in search results was artificially generated. At the moment, the onus lies with the image generator to provide this information. Strict guidelines apply, in particular, to those advertising around sensitive areas such as elections.

    AI and mySociety

    In the final session of the conference, we presented the various ways that we’ve been exploring how AI can support mySociety’s work. You can rewatch this session in full here.

    We have been guided by our own AI framework, in which we set out the six ethical principles to which we adhere when adopting this (or any) new technology. In essence, these can be boiled down to the single sentence: “We should use AI solutions when they are the best way of solving significant problems, are compatible with our wider ethical principles and reputation, and can be sustainably integrated into our work.” 

    In other words, we are not working backwards from the existence of AI to see what we could do with it, but approaching from the question of what we want to achieve, and then examining whether AI would aid us to do so more efficiently.

    In this session you can discover how we’ve used AI in our Transparency work to deal with problems in bulk more effectively, and to make information easier for everyone to access; and hear how, in our Democracy work (especially the recent WhoFundsThem project), we’ve found that a human approach is sometimes needed, but that there are some tasks AI can make easier.

    Looking to the future, we’re thinking about how AI might apply to WriteToThem: not to burden representatives with more mail, but perhaps to help people send communications of a higher quality.

    Overall, we’re keeping a wary eye open for how AI will almost certainly be (and already is?) muddying the ability to trust the provenance of information — especially given that mySociety is essentially a ‘resupplier’ of data from public authorities and Parliament.

    In a LinkedIn post, our Democracy Lead Alex got at the core of the challenges ahead of us all in the civic tech field, when he said: “Different kinds of technologies make different kinds of futures easier – and what we’re trying to do with pro-democratic tech is to make democratic futures easier. But the opposite is obviously [possible], and AI has arrived at the right time to merge aesthetically and ideologically with authoritarian regimes.

    “A core to the spirit of civic tech is persuasion by demonstration – and to me TICTeC is a wonderful distillation of that spirit of both imagining better things, and doing the work to show what’s possible.” 

    And on that thought, we will roll up our sleeves and work towards the version of the future that is better for everyone.

    TICTeC 2025 was more than a conference — it was a laboratory of hope. Thank you for curating such a thoughtful, globally inclusive space… let’s keep building more bridges across regions and generations — our challenges are shared, and so are the solutions.

    We’re leading the conversation on AI and democratic decision making —

    and we need your help.

    mySociety was founded more than two decades ago to help democratic governance deliver on the raised expectations of the internet era.

    We are in a period in which the relationship between tech and government is more entangled and fraught than ever. We’re stepping up, but we can only do so with your support. Please do consider making a donation.

  7. TICTeC 2025: “Energy, open exchanges, and motivation to keep pushing forward tech for democracy”

    TICTeC is wrapped up for another year. The roller banners are stowed away, the lanyards saved for next year, and now we’re back home from Belgium with memories, insights and enough hope to keep us going ’til next time. 

    It’s always energising to come together with the global civic tech community and share everything we’ve learned. We had attendees from 34 countries, bringing together their experiences — and judging by the comments we’re seeing, we’re not the only ones to have found it both enjoyable and valuable.

    The two days were “fabulous and thought provoking”, allowing for “the exchange of experiences and coordinated actions”, and delegates said they returned “inspired, with new insights on civic tech trends and promising collaboration ideas”.

    Perhaps Hendrik Nahr from make.org summed the whole experience up best when he said, “It felt like a family gathering of the civic tech community from Europe and beyond. I’m grateful for the energy, the open exchanges, and the motivation to keep pushing forward tech for democracy.”

    We are grateful too: TICTeC is not just about mySociety creating an open space for such discourse; it also depends on the people who participate and the insights they so generously articulate.

    What we talked about

    It’s hard to provide a full summary of such a packed event, but fortunately we’ll soon be able to share videos of the majority of the presentations, along with slides and photographs, so you’ll be able to choose what you’d like to see. 

    The overall theme of the conference was tech to defend and advance democracy, and within that there were strong strands around tech to tackle the climate emergency; citizen participation and deliberation; transparency and access to information… and across everything we heard of the seismic changes to society, to tech and to democracy — both already seen, and expected soon — brought about by the emergence of AI. 

    To pull out a few high points from so many thought-provoking moments:

    Marietje Schaake, delivering her keynote remotely because of last minute train strike issues, still managed to enthrall the auditorium and ignite our two days of conversation with an incisive overview of how big tech is overtaking democratic governance globally, with oversight lagging dangerously behind. We posted a summary on Bluesky in real time, if you can’t wait for the video.

    Fernanda Campagnucci’s day two keynote (summarised here) sliced up the different approaches government can take to citizen participation, from citizens feeding into decision-making processes, to citizens being invited to co-create both the data and the governance systems, featuring a nice story about an elderly lady who grumbled that everyone was talking about APIs (a way for software systems to communicate with one another) at a town meeting but she didn’t know what it meant. Once someone had explained to her, she turned up at every subsequent meeting to request APIs of every department’s output.

    Colin Megill used the opportunity provided by TICTeC to launch Pol.is 2.0 to a highly relevant audience. This is a paid version of the open source decision-making platform — the basis of Twitter’s “Community Notes” functionality — which contains a ‘superset’ of new features. Its enhanced LLM capabilities allow it to break sprawling conversations into any number of subtopics, making them easier to moderate and removing blocks to overall consensus that can be created by small sticking points.

    Panels brought people together to talk about aspects of parliamentary monitoring and access to information from around the world – discussions we will be continuing through our communities of practice work. 

    There was a useful session on the importance of, and methods for, measuring impact — after all, TICTeC’s foundational purpose — from OpenUp South Africa, Hungary’s Átlátszónet Foundation and SPOON Netherlands.

    We wrapped up the conference with an examination of how mySociety is navigating AI in recent and near future work, and an open forum about how TICTeC can evolve and continue to be useful to the global civic tech community. 

    We presented how we’re thinking about, utilising and navigating both the positives and potential dangers of AI. Such considerations are also preoccupations for others in our field: several organisations are experimenting with AI to achieve or work more efficiently toward their pro-democracy aims; others are foreseeing problems that AI may bring, from amplifying misinformation to algorithm-based decisions that affect individuals’ lives. 

    There wasn’t an organisation at TICTeC that wasn’t thinking about AI in one way or another, as evidenced in diverse sessions across the entire conference. There’s a sense that the conversation has matured since last year, moving on from hype to clear engagement with practical uses, and to scrutiny of both model creators and government uses. We’ll write more about this in a separate post.

    And also, watch this space for videos and photos from TICTeC 2025, which we’ll share as soon as they’re ready. That should keep us all going until next year.

  8. Episode 1: August 2024

    mySociety podcast

    It’s our first ever podcast at mySociety! Heeey how about that?

    Myf, our Communications Manager, runs you through all the stuff we’ve been doing at mySociety over the last month. It’s amazing what we manage to fit into just 30 days: you’ll hear about a meeting of Freedom of Information practitioners from around Europe; our new (and evolving) policy on the use of AI; a chat with someone who used the Climate Scorecards tool to springboard into further climate action… oh, and there’s just the small matter of the General Election here in the UK, which involved some crafty tweaking behind the scenes of our sites TheyWorkForYou and WriteToThem.

    Links

    Music: Chafftop by Blue Dot Sessions.

    Transcript

    0:00

    Well, hello and welcome to mySociety’s monthly round-up.

    My name is Myf Nixon, Communications Manager at mySociety.

    0:11

    This is part of an experiment that we’re currently running where we’re trying to talk about our work in new formats, to see if that makes it easier for you to keep up with our news. (more…)

  9. mySociety’s approach to AI

    To react appropriately to the emergence of AI, we need to understand it. We’re making our internal AI Framework public as a way of being transparent about the kind of questions we’re asking ourselves about using AI in mySociety’s tools and services. 

    At our recent TICTeC conference, there were several great examples of how generative AI approaches can be applied to civic tech problems.  But regardless of whether civic tech projects use AI approaches directly, it’s increasingly part of the tools we use,  and the context our services exist in is being changed by it. 

    A key way mySociety works is by applying relatively mature technology (like sending emails) in interesting ways to societal problems (reporting problems to the right level of government; transforming Parliamentary publishing; building a massive archive of Freedom of Information requests, etc). This informs how we adapt and advance our technical approach – we want to have clear eyes on the problems we want to solve rather than the tools we want to use.

    In this respect, generative AI is something new, but also something familiar. It’s a tool: it’s good at some things, not good at other things — and, as with other transformative tech we’ve lived through, we need to understand it and develop new skills to understand how to correctly apply it to the problems we’re trying to solve. 

    We currently have some funding from the Patrick J. McGovern Foundation where we’re exploring how new and old approaches can be applied to specific problems in our long running services. Across our different streams of work, we’ve been doing experiments and making practical use of generative AI tools, working with others to understand the potential, and thinking about the implications of integrating a new kind of technology into our work. 

    Our basic answer to “when should we use AI?” is straightforward. We should use AI solutions when they are the best way of solving problems, are compatible with our wider ethical principles and reputation, and can be sustainably integrated into our work.

    Breaking this down further led us to questions in six different domains:

    • Practical – does it solve a real problem for us or our users?
    • Societal – does it plausibly result in the kind of social change we want, and have we mitigated change we don’t want?
    • Legal/ethical – does our use of the tools match up to our wider standards and obligations?
    • Reputational – does using this harm how others view us or our services?
    • Infrastructural – have we properly considered the costs and benefits over time?
    • Environmental – have we specifically accounted for environmental costs?

    You can read the full document to see how we break this down further; but this is consciously a discussion starter rather than a checklist. 

    Publishing this framework is similarly meant to be a start to a discussion — and an anchor around open discussion of what we’ve been learning from our internal experiments. 

    We want to write a bit more in the open about the experiments we’ve been doing, where we see potential, where we see concerns. But this is all just part of the question at the root of our work: how can we use technology as a lever to help people to take part in and change society?

    Image: Eric Krull

  10. Training an AI to generate FixMyStreet reports

    Artificial intelligence and machine learning seem to be everywhere at the moment – every day there’s a new story about the latest smart assistant, self-driving car or the impending takeover of the world by robots. With FixMyStreet having recently reached one million reports, I started wondering what kind of fun things could be done with that dataset.

    Inspired by a recent post that generated UK place names using a neural network, I thought I’d dip my toes in the deep learning sea and apply the same technique to FixMyStreet reports. Predictably enough the results are a bit weird.

    I took the titles from all the public reports on fixmystreet.com as the training data, and left the training process to run overnight. The number crunching was pretty slow and the calculations had barely reached 5% by the morning. I suspect the training set was a bit too large, at over 1M entries, but the end result still gives enough to work with.
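    For a rough sense of what “taking the titles as training data” involves: a character-level model first needs each title turned into a sequence of integers over a character vocabulary. This is a minimal sketch of that preprocessing step, with a handful of stand-in titles rather than the real ~1M-entry dataset:

```python
# Build a character vocabulary from report titles and encode them as
# integer sequences -- the standard first step for char-level models.
# These titles are stand-ins for the real FixMyStreet training set.

titles = [
    "Pothole in road",
    "Street light not working",
    "Dog mess on pavement",
]

# Map every distinct character to an integer index (and back).
chars = sorted(set("".join(titles)))
char_to_idx = {c: i for i, c in enumerate(chars)}
idx_to_char = {i: c for c, i in char_to_idx.items()}

def encode(text):
    """Turn a title into a list of integer character indices."""
    return [char_to_idx[c] for c in text]

def decode(indices):
    """Turn model output indices back into text."""
    return "".join(idx_to_char[i] for i in indices)

encoded = [encode(t) for t in titles]
assert decode(encoded[0]) == "Pothole in road"  # round-trip check
```

    The network then learns to predict the next index in a sequence, and sampling from those predictions (then decoding back to characters) is what produces the titles below.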

    The training process produces checkpoints along the way, which you can use to see how the learning is progressing. After 1000 iterations the model was starting to be aware that it should use words, but didn’t really know how to spell them:

    Mertricolbes
    Ice does thrown campryings
    Sunky riking proper, badger verwappefing cars off uping is!
    Finst Knmp
    Lyghimes Jn fence
    Moadle bridge is one descemjop
    

    After 15000 iterations it’s starting to get the hang of real words, though still struggling to form coherent sentences.

    Untaxed cacistance.
    Broken Surface in ARRUIGARDUR. Widdy movering
    Cracked already nail some house height avenue.
    Light not worky
    I large pot hole
    Dumped shood road nod at street.
    Grim
    Dog man
    Ongorently obstructing sofas. This birgs.
    Serious Dirches
    

    After 68000 iterations there seems to be enough confusion in the training data that things start to go south again with the default parameters:

    Urgely councille at jnc swept arobley men.
    They whention to public bend to street? For traffic light not working
    

    Tweaking the ‘temperature’ of the sampling process produces increasingly sensible results:

    Large crumbling on pavement
    Potholes all overgrown for deep pothole
    Very van causing the road
    Very deep potholes on pavement
    Weeds on the pavement
    Several potholes in the road
    Rubbish Dumped on the road markings
    Potholes on three away surface blocking my peride garden of the pavement
    Potholes and rubbish bags on pavement
    Poor road sign damaged
    Poor street lights not working
    Dog mess in can on road bollard on pavement
    A large potholes and street light post in middle of road
    
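    The ‘temperature’ knob works by dividing the model’s raw output scores before they’re turned into probabilities: values below 1 sharpen the distribution towards the most likely next character, values above 1 flatten it towards randomness. A minimal stdlib sketch, with made-up scores standing in for real model output:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from raw model scores, scaled by temperature.

    Low temperature -> conservative, picks the likeliest characters;
    high temperature -> diverse, occasionally absurd output.
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Made-up scores for three candidate next characters:
logits = [4.0, 1.0, 0.2]

# At very low temperature the top-scoring index wins almost every time.
cold = [sample_with_temperature(logits, 0.1) for _ in range(100)]
assert cold.count(0) >= 95
```

    Turning the temperature down is what pushed the samples towards plausible-but-dull titles; turning it up is where the stranger results further on come from.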

    As well as plenty of variations on the most popular titles:

    Pot hole
    Pot hole on pavement
    Pot holes and pavement around
    Pot holes needings to path
    Pothole
    Pothole dark
    Pothole in road
    Pothole/Damaged to to weeks
    Potholes
    Potholes all overgrown for deep pothole
    Potholes in Cavation Close
    Potholes in lamp post Out
    Potholes in right stop lines sign
    Potholes on Knothendabout
    Street Light
    Street Lighting
    Street light
    Street light fence the entranch to Parver close
    Street light not working
    Street light not working develter
    Street light out opposite 82/00 Tood
    Street lights
    Street lights not working in manham wall post
    Street lights on path
    Street lights out
    

    It also seems to do quite well at making up road names that don’t exist in any of the original reports (or in reality):

    Street Light Out - 605 Ridington Road
    Signs left on qualing Road, Leave SE2234
    4 Phiphest Park Road Hasnyleys Rd
    Apton flytipping on Willour Lane
    The road U6!
    

    Here are a few of my favourites for their sheer absurdity:

    Huge pothole signs
    Lack of rubbish
    Wheelie car
    Keep Potholes
    Mattress left on cars
    Ant flat in the middle of road
    Flytipping goon!
    Pothole on the trees
    Abandoned rubbish in lane approaching badger toward Way ockgatton trees
    Overgrown bush Is broken - life of the road.
    Poo car
    Road missing
    Missing dog fouling - under traffic lights
    

    Aside from perhaps generating realistic-looking reports for demo/development sites, I don’t know if this has any practical application for FixMyStreet, but it was fun to see what kind of thing is possible with not much work.




    Image: Scott Lynch (CC by/2.0)