1. Climate monthnotes: December 2021

    Another month, another chance to share progress from the Climate team. And this time, you get to hear it from a different person too – Hello! I’m Zarino, one of mySociety’s designers, and Product Lead for the Climate programme.

    Over the last month, we’ve moved the programme on in three main areas: adding some much-anticipated features to our headline product, the Climate Action Plans Explorer; continuing full steam ahead on development of Climate Emergency UK’s ‘Council Climate Plan Scorecards’ site; and setting up a research commissioning process that will kick in early next year.

    New features on CAPE

    Just barely missing the cut for Siôn’s mid-November monthnotes, we flipped the switch on another incremental improvement to CAPE, our database of council climate action plans:

    CAPE showing climate declarations and promises for a council

    CAPE now shows you whether a council has declared a climate emergency, and whether they’ve set themselves any public targets on becoming carbon neutral by a certain date. We are incredibly grateful to our partners Climate Emergency UK for helping us gather this data. Read my earlier blog post to find out more about how we achieved it.

    As well as displaying more data about each council, a core aim of the CAPE site is enabling more valuable comparisons with—and explorations of—the plans of similar councils. Previously, we’d done this by allowing you to browse councils of a particular type (London Boroughs, say, or County Councils), and by showing a list of “nearby” councils on each council’s page.

    Old CAPE page showing nearby councils

    However, we’re now excited to announce the launch of a whole new dimension of council comparisons on the site, thanks to some amazing work by our Research Associate Alex. To try them out, visit your council’s page on CAPE, and scroll down:

    New CAPE page showing similar councils

    These five tabs at the bottom of a council’s page hide a whole load of complexity—much of which I can barely explain myself—but the upshot is that visitors to CAPE will now be able to see much more useful, and accurate, suggestions of similar councils whose plans they might want to check out. Similar councils, after all, may be facing similar challenges, and may be able to share similar best practices. Sharing these best practices is what CAPE is all about.

    We’ll blog more about how we prepared these comparisons in the new year.

    Council Climate Plan Scorecards

    As previously noted, we’re working with Climate Emergency UK to display the results of their analysis of council climate action plans in early 2022. These “scorecards”, produced by trained volunteers marking councils’ published climate action plans and documents, will help open up the rich content of councils’ plans, as well as highlighting best practice in nine key areas of a good climate emergency response.

    As part of the marking process, every council has been given a ‘Right of Reply’, to help Climate Emergency UK make sure the scorecards are as accurate as possible. We’re happy to share that they’ve received over 150 of these replies, representing over 50% of councils with a published climate action plan.

    With those council replies received, this month Climate Emergency UK’s experts were able to complete a second round of marking, producing the final scores.

    Meanwhile, Lucas, Struan, and I have been working away on the website interface that will make this huge wealth of data easily accessible and understandable – we look forward to sharing more about this in January’s monthnotes.

    Research commissioning

    Finally, as Alex recently blogged, we’ve been setting up a research commissioning process for mySociety – primarily to handle all the research we’d like to do in the Climate programme next year. Our main topics for exploration aren’t yet finalised, but we’re currently very interested in the following three areas:

    1. Public understanding of local authorities and climate
    2. Public pressure and local authorities
    3. How local authorities make decisions around climate

    Watch this space for more details about these research opportunities, and how to get involved.

  2. New on CAPE: headline pledges from councils

    The Climate Action Plans Explorer (CAPE) is gathering together every Climate Action Plan from every UK council that’s published one. We’re actively adding more functionality on an ongoing basis; most recently, we’ve extracted the ‘headline pledges’ from each plan, like this:

    screenshot showing Glasgow’s climate pledge

    Pledges like this give an idea of the council’s overarching priorities, but often have not been presented in isolation before, even by the councils themselves.

    Why we did this

    A core aim of our Climate programme is to improve the information ecosystem around local responses to the climate emergency:

    “We’re improving the information ecosystem to allow local and national campaigns, policymakers and other stakeholders to undertake better scrutiny and analysis of local climate action, and develop evidence-based policies and solutions.”

    We’ve already written about how we’re working with Climate Emergency UK to collect and score Climate Action Plans for every local authority in the UK. Providing people with an easy way into their local authority’s action plan will give them an unprecedented opportunity to gauge their council’s level of ambition in facing the climate emergency, and how they’re planning to turn those ambitions into actions.

    But plans can be complex, and time-consuming to read. Another, faster way people can understand their council’s level of ambition is by finding any targets that it might have set itself for decarbonising either the entire area, or just the council’s own operations, by a particular date.

    We call these ‘pledges’, and they’re typically not all that easy to find – they can be buried in council meeting minutes, or slipped somewhere into an unassuming page on the council’s website or action plan.

    Knowing what date your council is working towards, and what they believe they can achieve by that date, fundamentally sets the scene when it comes to understanding and contributing to the council’s climate actions.

    That’s why we decided to collect these pledges and share them on CAPE. Here’s where you’ll find them on each council’s page, setting the scene before you dig into the full action plan:

    Screenshot of Glasgow’s page on the Climate Action Plan explorer

    How we did this

    Collecting these pledges from scratch would be a mammoth task. Luckily, we were able to build on two partial datasets that gave us a head start.

    An important thing to note is that we wanted to collect not only the scope (that is to say, whether the plan covers council operations only, or the whole area) and target date, but also the exact wording of the pledge, and the source where it was found.

    We found that local authorities often use terms like ‘carbon neutral’ and ‘net zero’ interchangeably, and the scope of pledges can sometimes be ambiguous. The most objective approach, therefore, was to present the entire pledge, as it was originally worded, and leave it up to the viewer to interpret the council’s intent. Collecting and exposing the source of the pledge would allow them to dig deeper and view the pledge in context, if they wanted to.
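    To make that concrete, here’s a sketch of the shape a pledge record takes under this approach. The field names and the helper below are ours, for illustration only – they are not CAPE’s actual schema:

```javascript
// Illustrative record for a single climate pledge. We keep the exact
// original wording and its source, rather than trying to interpret
// ambiguous terms like "carbon neutral" or "net zero" ourselves.
function makePledge(council, scope, targetYear, wording, sourceUrl) {
  // scope: does the pledge cover only the council's own operations,
  // or the whole area it serves?
  if (scope !== "council operations" && scope !== "whole area") {
    throw new Error("scope must be 'council operations' or 'whole area'");
  }
  return { council, scope, targetYear, wording, sourceUrl };
}

// Example with placeholder values (not a real council's pledge):
const pledge = makePledge(
  "Example Council",
  "whole area",
  2030,
  "Example pledge text, quoted verbatim from the source document",
  "https://example.org/climate-plan" // hypothetical source URL
);
```

Keeping the quote and source alongside the structured fields is what lets a reader check the council’s intent for themselves.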

    Our partners, Climate Emergency UK, had already been collecting climate target dates as part of their ongoing monitoring of council responses to the climate and biodiversity emergencies. But since the target dates were just one small part of a much wider database, they hadn’t collected the direct quotes that we wanted to present.

    Still, in the Climate Emergency UK dataset, we effectively had a wide but shallow starting point, covering most councils in the UK, from which we could then fill in the detail by revisiting councils’ websites, scanning them for anything that looked like a pledge, and pasting the pledges into our database.

    We were also incredibly grateful to receive a smaller, but much more detailed, dataset of climate commitments from the National Audit Office, which covered 70% of the principal local authorities in England, some of which had made no commitments. They themselves had manually gathered these commitments from public sources—councils’ minutes, websites, and action plans—between April and June 2021.

    Combined with the Climate Emergency UK dataset, this data from the National Audit Office got us 75% of the way towards a full dataset of climate pledges from every council in the UK.
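    The merging step can be pictured something like the sketch below: start from the wide-but-shallow target dates, then layer the richer records on top where they exist. This illustrates the approach, rather than being our actual import code:

```javascript
// Combine a shallow dataset (council -> target year) with a detailed
// one (council -> quoted wording and source). Detail wins where both
// datasets cover the same council; gaps stay explicit, ready to be
// filled in by hand later.
function combinePledgeData(shallow, detailed) {
  const merged = {};
  for (const [council, targetYear] of Object.entries(shallow)) {
    merged[council] = { targetYear, wording: null, source: null };
  }
  for (const [council, detail] of Object.entries(detailed)) {
    merged[council] = { ...merged[council], ...detail };
  }
  return merged;
}
```

The explicit nulls matter: every gap in the merged dataset is a row a volunteer knows still needs chasing.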

    With the help of Climate Emergency UK volunteers, we filled in the gaps on this combined dataset, collecting direct quotes for both council-only and whole area climate pledges, for 341 of the 408 councils in the UK.

    For the remaining 67 councils, we were unable to find a public climate pledge, or at least one with a concrete target date – but we’re hopeful that we might yet find this information, and the CAPE website includes a link on these councils’ pages through which visitors (or maybe councillors or council officers!) can contribute the data, if they’ve found a source elsewhere.

    Screenshot of the council list page

    The data was collected via an online spreadsheet, making it fairly easy to import into CAPE as part of the website’s existing data processing pipeline. This feeds the pledges through to both the individual council pages and the all councils page, where you can now filter the list to show only councils with a target in a given five-year period, or no target at all.
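    The five-year filter boils down to a check like this one (the data shape here is illustrative; the real page works from CAPE’s own models):

```javascript
// Keep only councils whose target year falls within a five-year band,
// e.g. bandStart = 2030 covers 2030-2034 inclusive. Councils with no
// known target carry a null targetYear and are excluded.
function filterByTargetBand(councils, bandStart) {
  return councils.filter(
    (c) =>
      c.targetYear !== null &&
      c.targetYear >= bandStart &&
      c.targetYear < bandStart + 5
  );
}

// Hypothetical sample data:
const councils = [
  { name: "A Council", targetYear: 2030 },
  { name: "B Council", targetYear: 2041 },
  { name: "C Council", targetYear: null },
];
```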

    We will soon also be exposing these pledges via the CAPE API, so third parties can programmatically access and reuse the data in their own services. If this sounds like something you might find useful, do get in touch or subscribe to our Climate newsletter where we’ll be sure to share any news.


    Image: Romain Dancre

  3. How we got a 90-plus PageSpeed score on our 2020 annual report

    In this technical deep-dive, our designer Zarino Zappia explores how we used the latest web performance techniques to make our 2020 annual report quick and easy to download — no matter what kind of connection or device you’re using.


    Each year, mySociety produces an online annual report sharing a round-up of our achievements, a thank you to our supporters, funders, and volunteers, and our vision for the immediate future.

    For the last two years, that annual report has taken the form of a single, multi-section web page.

    This year, with more news than ever to cover across both mySociety and our commercial subsidiary SocietyWorks, we knew it was super important to get that single page running as smoothly and efficiently as possible in visitors’ web browsers.

    Our annual report is often shared as an introduction to the work mySociety does. It’s crucial that it sets a good first impression, and performance is a big part of that. It’s also an embodiment of what sets us apart from the crowd – we sweat the small stuff, to give users the best possible experience.

    What’s a PageSpeed score anyway?

    Google has done a lot of work championing fast-loading websites, and their PageSpeed Insights tool is often used as a benchmark of web performance. Developers aim to achieve a mobile performance score of 90 or more – but this is often easier said than done!

    In this fairly technical blog post, I’m going to go through a few of the ways we really optimised the performance of our 2020 annual report, to achieve a consistent 90-plus PageSpeed score. Through a combination of common sense, technological restraint, and a little extra effort, we manage to serve 6,500 words (a 33% increase over last year), 90 images, and two videos, over nine sections of a single page, in the blink of an eye.

    Here’s how.

    Keep it simple, stupid

    Like most mySociety products, the annual report starts with a very boring, very unfashionable, but very fast base – plain old HTML – in this case generated by Jekyll. Jekyll helps to keep our work maintainable by letting us refactor common elements out into their own includes, but since everything compiles to plain HTML, we get lightning fast page renders from our Nginx web server. We use static HTML sites a lot at mySociety – for example, for our documentation, our prototypes, and even some of our user-facing products. Sometimes simple is just better!

    Where we want to improve the user experience through interaction – things like hiding and showing the menu, and updating the URL as you scroll down the page – then we add small amounts of progressive enhancement via JavaScript. Again, like most mySociety products, the annual report treats JavaScript as a nice-to-have – everything would work without it, which means the browser can render the page quickly and our users spend less time looking at a blank white loading screen.
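    In practice, progressive enhancement starts with a simple gate: detect the features a script needs, and bail out quietly if they’re missing. The helper below is a simplified illustration of that pattern, written so the decision logic can run outside a browser; the feature names are examples, not our exact code:

```javascript
// Decide whether to layer JavaScript behaviour on top of the plain
// HTML page. If any required capability is missing, the page simply
// stays as working, un-enhanced HTML.
function shouldEnhance(features) {
  return Boolean(features.querySelector && features.pushState);
}

// In the browser, you'd feed it real feature detection (illustrative):
// if (shouldEnhance({
//   querySelector: "querySelector" in document,
//   pushState: window.history && "pushState" in window.history,
// })) {
//   /* wire up the menu toggle and scroll-position URL updates */
// }
```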

    On the styling side, we use Bootstrap to help us quickly pull together the frontend design work on the annual report in about a week at the beginning of December. We build our own slimmed down version of Bootstrap from the Sass source, with just the modules we need. Libraries like Bootstrap can sometimes be a bit of a sledgehammer to crack a nut, but by cherry-picking just the modules you need, you can dramatically reduce the size of your compiled CSS, without losing the flexibility and fast prototyping that a library like Bootstrap provides. The annual report makes heavy use of Bootstrap’s built-in components and utility functions, and then anything truly bespoke is handled by a few of our own custom styles laid on top.

    Use the tools at your disposal

    Performance tweaks that were cutting edge a few years ago have quickly gained widespread browser support, so if you’re not using them, you’re missing out. Most of them take pretty much no effort to set up, and will dramatically improve perceived loading time for your users.

    Since the annual report is a single, long page with almost 100 image tags, it’s imperative that we use lazy loading, to prevent the browser requesting all of those images at once on initial page load. Doing this is as easy as adding loading="lazy" to the image tags, but we also add a JavaScript fallback for browsers that don’t support the loading attribute, just to be safe. We even wrap it up in a nice Jekyll include, to make it super easy when managing the page content.
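    The hybrid pattern looks roughly like this. The helper name is ours, standing in for the Jekyll include; the data-src promotion and IntersectionObserver fallback shown in comments are the standard approach rather than our exact script:

```javascript
// Emit an <img> whose real URL sits in data-src. A small script then
// promotes data-src to src: immediately where the browser supports
// native lazy loading (letting loading="lazy" defer the fetch), or as
// the image scrolls into view otherwise.
function lazyImgTag(src, alt) {
  return `<img data-src="${src}" alt="${alt}" loading="lazy">`;
}

// Runtime side, in the page (illustrative):
// if ("loading" in HTMLImageElement.prototype) {
//   document.querySelectorAll("img[data-src]").forEach((img) => {
//     img.src = img.dataset.src;
//   });
// } else {
//   const io = new IntersectionObserver((entries) => {
//     entries.forEach((entry) => {
//       if (entry.isIntersecting) {
//         entry.target.src = entry.target.dataset.src;
//         io.unobserve(entry.target);
//       }
//     });
//   });
//   document.querySelectorAll("img[data-src]").forEach((img) => io.observe(img));
// }
```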

    We minify our CSS and JavaScript files, and the server is set to gzip them in transit. These things are really easy to forget, but can make a big difference to the load time of a page full of static files.

    We also make an effort to efficiently encode our images – serving images at roughly the size they’re actually displayed, to reduce wasted bandwidth, and using WebP compression with JPEG fallbacks, for the best balance of quality and filesize.

    Reduce run-time dependencies

    A big part of webpage performance is all the stuff that happens after the browser starts rendering the page. Loading external JavaScript libraries, tracking services, logging services – it all imposes a cost on your users. For some larger projects, you might decide the benefits for your team and the health of the product outweigh the network and performance cost for your users. But for the annual report we did the typically mySociety thing of reducing, reducing, reducing, to the simplest possible outcome. For example…

    We don’t load any social media JavaScript. Data is patchy on whether people actually use social sharing buttons anyway, but just in case, where we do display sharing buttons, we use old-fashioned sharing URLs that just open in a new tab. Rather than embedding tweets using Twitter’s JavaScript embed API, we build our own tweets out of HTML and CSS. Not only does this mean we’re avoiding having to load megabytes of third-party JavaScript on our page, but it also helps protect the privacy of our visitors.
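    Twitter’s tweet intent endpoint is a documented URL scheme, so a share button can be a plain link built like this (the text and URL below are just examples):

```javascript
// Build an old-fashioned tweet link: no third-party script, no
// tracking, just a URL that opens in a new tab.
function tweetShareUrl(text, url) {
  return (
    "https://twitter.com/intent/tweet" +
    "?text=" + encodeURIComponent(text) +
    "&url=" + encodeURIComponent(url)
  );
}
```

In the page, the result simply becomes the href of an ordinary anchor with target="_blank" – no JavaScript required at all.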

    Sometimes you can’t avoid third-party JavaScript, though. For example, YouTube embeds. What you can do is defer loading the third-party JavaScript until right before it’s needed. In the case of YouTube embeds on the annual report, we use a little bit of our own JavaScript to delay loading any YouTube code until you click on a video thumbnail. To the end user, the entire thing happens instantaneously. But we know we’ve saved a whole lot of time not loading those iframes until they were really needed.
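    Sketched out, the click-to-load idea looks like this. The embed URL format is YouTube’s documented one; the surrounding wiring in comments is a simplified version of what we do, with attribute names we’ve made up for the example:

```javascript
// Build the embed URL for a given video ID. autoplay=1 means the
// video starts as soon as the iframe loads, since the visitor has
// already clicked play on the thumbnail.
function youTubeEmbedSrc(videoId) {
  return "https://www.youtube.com/embed/" + videoId + "?autoplay=1";
}

// In the page (illustrative):
// document.querySelectorAll("[data-youtube-id]").forEach((thumb) => {
//   thumb.addEventListener("click", () => {
//     const iframe = document.createElement("iframe");
//     iframe.src = youTubeEmbedSrc(thumb.dataset.youtubeId);
//     iframe.allow = "autoplay";
//     thumb.replaceWith(iframe);
//   });
// });
```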

    Unusually for a mySociety project, we don’t even load jQuery on the annual report. While jQuery still makes sense for our bigger products, on something as simple as our annual report, it’s just not worth the filesize overhead. Instead we write modern, efficient JavaScript, and include small polyfills for features that might not be present in older browsers. It makes development slightly slower, but it’s worth it for a faster, smoother experience for our visitors.
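    The polyfill pattern in miniature: define a feature only when the browser lacks it, so modern browsers pay no cost. Array.prototype.includes is a real example of the kind of small gap involved (this simplified version skips the NaN edge case a full polyfill handles):

```javascript
// Only patch the feature if it's genuinely missing; modern browsers
// skip this block entirely.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (item) {
    return this.indexOf(item) !== -1;
  };
}
```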

    Nothing’s perfect!

    Overall, some sensible tech decisions early on, and a lot of hard work from the design team (including our new hire, Clive) resulted in a 2020 annual report I think we can be really proud of. I hope it’s also a taste of things to come, as we start introducing the same techniques into our other, bigger products.

    That said, there are still a few things we could improve.

    We could achieve almost instant rendering on first pageload by extracting the CSS required to display the first few hundred pixels of the page, and inlining it, so it renders before the rest of the styles have been downloaded. We already do this on fixmystreet.com, for example, using Penthouse.

    There’s also an external dependency we could drop – Google Fonts. We’re currently using it to serve our corporate typeface, Source Sans Pro. The advantage of loading the typeface off Google Fonts is that most of our other sites also load the same typeface, so chances are the user will already have the font files in their browser cache. Self-hosting the fonts is unlikely to be much faster than loading from Google, but it would be a win for user privacy, since it’s one less external resource being hit for each pageload. Something to consider for future!


    Photo by Indira Tjokorda on Unsplash

  4. Open sesame! Simpler log-in forms on FixMyStreet

    When something’s not right on your street, and you’ve gone out of your way to report it to the local council, the last thing you want is to get bogged down in a complex log-in procedure.

    That’s why FixMyStreet has always put the log-in step after the reporting step, and has always allowed you to report a problem without needing an account or password at all.

    But we know we can always do better, and in the 11 years that FixMyStreet has been around, new design patterns have emerged across the web, shifting user expectations around how we prove our identities and manage our data on websites and online services.

    Over the years, we’d made small changes, informed by user feedback and A/B testing. But earlier this year, we decided to take a more holistic look at the entire log-in/sign-up process on FixMyStreet, and see whether some more fundamental changes could not only reduce the friction our users were experiencing, but help FixMyStreet actively exceed the average 2018 web user’s expectations and experiences around logging in and signing up to websites.

    One thing at a time

    Previously, FixMyStreet tried to do clever things with multi-purpose forms that could either log you in or create an account or change your password. This was a smart way to reduce the number of pages a user had to load. But now, with the vast majority of our UK users accessing FixMyStreet over high speed internet connections, our unusual combined log-in/sign-up forms simply served to break established web conventions and make parts of FixMyStreet feel strange and unfamiliar.

    In 2014 we added dedicated links to a “My account” page, and the “Change your password” form, but it still didn’t prevent a steady trickle of support emails from users understandably confused over whether they needed an account, whether they were already logged in, and how they could sign up.

    So this year, we took some of the advice we usually give to our partners and clients: do one thing per screen, and do it well. In early November, we launched dramatically simplified login and signup pages across the entire FixMyStreet network – including all of the sites we run for councils and public authorities who use FixMyStreet Pro.

    Along the way, we took careful steps—as we always do—to ensure that assistive devices are treated as first class citizens. That means everything from maintaining a sensible tab order for keyboard users, and following best practices for accessible, semantic markup for visually impaired users, to also making sure our login forms work with all the top password managers.

    Keeping you in control

    The simplified log-in page was a great step forward, but we knew the majority of FixMyStreet users never used it. Instead, they would sign up or log in during the process of reporting their problem.

    So, we needed to take some of the simplicity of our new log-in pages, and apply it to the reporting form itself.

    For a few years now, the FixMyStreet reporting form has been split into two sections – “Public details” about the problem (which are published online for all to see) followed by “Private details” about you, the reporter (which aren’t published, but are sent to the authority along with your report, so they can respond to you). This year, we decided to make the split much clearer, by dividing the form across two screens.

    Now the private details section has space to shine. Reorganised, with the email and password inputs next to each other (another convention that’s become solidified over the last five or ten years), and the “privacy level” of the inputs increasing as you proceed further down the page, the form makes much more sense.

    But to make sure you don’t feel like your report has been thrown away when it disappears off-screen, we use subtle animation, and a small “summary” of the report title and description near the top of the log-in form, to reassure you of your progress through the reporting process. The summary also acts as a logical place to return to your report’s public details, in case you want to add or amend them before you send.

    Better for everyone

    As I’ve mentioned, because FixMyStreet is an open source project, these improvements will soon be available for other FixMyStreet sites all over the UK and indeed the world. We’ve already updated FixMyStreet.com and our council partners’ sites to benefit from them, and we’ll soon be officially releasing the changes as part of FixMyStreet version 2.5, before the end of the year.

    I want to take a moment to thank everyone at mySociety who’s contributed to these improvements – including Martin, Louise C, Louise H, Matthew, Dave, and Struan – as well as the helpful feedback we’ve had from our council partners, and our users.

    We’re not finished yet though! We’re always working on improving FixMyStreet, and we’ll be keeping a keen eye on user feedback after these changes, so we can inform future improvements to FixMyStreet.com and the FixMyStreet Platform.


  5. An update on our Collideoscope research

    So the last time we blogged about Collideoscope—our cycling collision and near miss reporting service, based on the FixMyStreet Platform—we’d just begun an exciting new phase of exploratory work, looking into how well the site currently meets user needs around collision prevention, and whether it could do more, for instance, in helping cyclists campaign for better safety measures, or helping police collect collision reports more efficiently.

    Since then, we’ve conducted a series of interviews: with cyclists and campaign groups in the Merseyside area, with road safety data specialists from further afield, and with West Midlands Police, whose approach to cycle safety has garnered much praise over the last few years.

    One-on-one interviews are a part of the user-centred design toolkit that we use a lot at mySociety when we’re early on in a project and just want to map out the processes or problems people face, without jumping to conclusions over how they could be solved right now.

    In this case, we used the interviews to improve our understanding of five main areas:

    • The physical process of reporting a collision, or a near miss, to the police.
    • What incentives / disincentives cyclists have faced when reporting.
    • How police forces currently deal with collision reports, and near miss reports.
    • What role video recordings can play in reports / prosecutions, and what legal considerations need to be made, to prevent video damaging the case.
    • What data cycle safety campaigners currently use, and what new data they feel could improve their case when arguing for better cycling provision.

    The experiences, anecdotes, and connections we collect from interviews like these help us shape our thinking about how to build or improve our products, as well as highlighting particular avenues that need more research, or that we can start prototyping right away.

    Take video camera footage, for instance. A number of Collideoscope users have asked that we allow them to upload clips from their helmet- or handlebar-mounted cameras, along with their reports.

    But, on the other hand, we’d also heard a lot about how police forces were wary of collecting video footage, and especially worried about online videos damaging the chances of successful prosecutions in court.

    Our recent interviews showed us the line isn’t quite so clear – savvy police forces realise video evidence is hard to argue with in court, and they want people to submit videos as often as possible. In reality, if a claim reaches court, it’s not the presence of videos online that poses a problem, but the finger-pointing or speculation that often accompanies online footage in the comments section below the video, or in social media posts. This was fascinating to hear, and immediately gave us ideas as to the changes we might need to make, to protect the integrity of video evidence, if we allowed cyclists to upload clips to Collideoscope.

    It was also interesting speaking to campaigners about how the data collected by Collideoscope could help them raise the profile of cycle safety in their local areas, or on a national scale – especially data about near misses, something not covered by the UK’s official STATS19 dataset. We’re going to investigate how we could bring some of our boundary-related reporting expertise from MapIt and FixMyStreet onto Collideoscope, to help policy makers compare safety efforts in different areas, and help campaigners and councillors raise concerns over dangerous hotspots.
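    For readers curious what that boundary expertise looks like in practice: MapIt’s point lookup maps a longitude/latitude pair to the administrative areas containing it, which is the kind of query that would let us group reports by council or police force area. The URL builder below reflects MapIt’s documented endpoint format (SRID 4326 means plain longitude/latitude); treat it as a sketch rather than anything Collideoscope currently does:

```javascript
// Build a MapIt point lookup URL. Fetching it returns JSON describing
// every administrative area covering that point.
function mapItPointUrl(lon, lat) {
  return `https://mapit.mysociety.org/point/4326/${lon},${lat}`;
}

// e.g. a point in Liverpool (illustrative coordinates):
// fetch(mapItPointUrl(-2.98, 53.41)).then((r) => r.json())
```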

    Later this month, we’ll begin prototyping how some of the things we’ve learned could work their way into Collideoscope. We’re also particularly keen to investigate the technical feasibility of integrating directly into police incident reporting products, such as the Egress-powered Operation Snap used by police forces in Wales and soon, hopefully, other forces in the UK.

    As before though, our research is by no means complete, so if you have expertise in this field, and would like to be consulted or participate in the project, we’d love to hear from you.

  6. Making better information requests in Hackney – Part II

    This is part II in a series of blog posts about our work with Hackney Council to conduct a discovery and prototyping project to improve the public-facing parts of information request processes. Read the first part here.

    With our experience of running WhatDoTheyKnow and Alaveteli, we’re big believers in the importance of information transparency laws in our democratic system. But at the same time, we understand the operational challenges of meeting the statutory requirements of these laws for public bodies.

    The challenges of dealing with information requests

    While many local authorities have dedicated information governance staff whose job it is to manage requests, finding and compiling the information is often done by hard-pressed staff within services, fitting in information-related work around their core responsibilities.

    Requesters sometimes have the expectation that information is all carefully organised in a few databases, ready to be aggregated and extracted at the click of a button. In reality, the degree to which information is well-organised varies a lot between services in a council and between different councils, often because of procurement decisions and departmental priorities stretching back many years.

    Working with Hackney Council

    We’re part way through our project with Hackney Council that Mark wrote about a few weeks ago, helping them redesign the public-facing parts of their FOI and Subject Access Request (SAR) services, so we’ve seen these challenges up close. We’ve also seen how they can be made even trickier by legacy IT systems that are no longer fit for purpose.

    At our ‘show and tell’ last week with Hackney, we shared findings from some of the research we’ve done, along with early prototypes of potential new FOI and SAR submission forms, both created collaboratively with Hackney Council staff.

    Prototyping with the GDS toolkit

    We decided to go straight from sketches to HTML-based prototypes for this project. We thought that moving in-browser with real interactive elements would make it easier to test out some of the more complicated interactions and conditional workflow of the SAR process in particular.

    Prototyping is about quickly testing hypotheses, and not getting bogged down in implementation details. So it was a pleasure to use the GDS frontend toolkit to build our prototypes. Not only did the GOV.UK toolkit help us build something relatively rich in just a few days, but it also meant we benefited from GDS’s previous work on design patterns developed for exactly this kind of context.

    Using public information to reduce FOI requests

    When submitting a request via WhatDoTheyKnow, users are automatically shown previously submitted requests that might answer their question, and we know that almost 10% of requesters have a look at these suggestions in more detail.

    As you can see here, we took this pattern from WhatDoTheyKnow, and enhanced it further in our FOI prototype.

    Now, as well as prompting with previous FOI responses from a disclosure log that we’re hoping to build, we also want to include relevant links to other topical or frequently requested information, drawn from other data sources within the council.
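    To give a flavour of the idea – and no more than that, since WhatDoTheyKnow’s real search is a proper full-text engine – matching a draft request against previous ones can be as simple as scoring keyword overlap:

```javascript
// Deliberately naive similarity: score each previous request by how
// many substantial words its title shares with the draft request, and
// suggest the best matches.
function suggestSimilar(draft, previous, limit = 3) {
  const words = (s) =>
    new Set(s.toLowerCase().split(/\W+/).filter((w) => w.length > 3));
  const draftWords = words(draft);
  return previous
    .map((req) => {
      let score = 0;
      for (const w of words(req.title)) {
        if (draftWords.has(w)) score += 1;
      }
      return { req, score };
    })
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((x) => x.req.title);
}
```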

    If we can get this to work well, we think it could have a number of benefits:

    • Helping the person submitting the FOI get their answer more quickly
    • Reducing the number of requests that would otherwise have been sent to the council
    • Encouraging more proactive, structured publishing of data by the council.

    It’s this last point which we’re really interested in, longer term. We think using a common-sense technology approach to highlight possible answers to FOI requests before they are made will get people answers more quickly, reduce the burden on responders, and reinforce efforts to proactively release commonly requested data, leading to real transparency benefits.

    We’ve been doing usability testing of these prototypes over the last few days, and will be looking very closely at whether our assumptions here stand up to scrutiny.

    Reducing back and forth around SARs

    Given our background, we’re inevitably very interested in the FOI component of this project, but the Subject Access Request component is no less important. And while there’s none of the same opportunity for pre-emptively answering people’s requests, there’s plenty of scope for making the submission process a lot smoother.

A valid FOI request is just a written question and some contact details, but the information needed for a SAR to be valid is much greater and more complicated. For example, along with the specifics of the request, a solicitor asking for personal information on behalf of a parent and their 16-year-old child would have to provide proof of ID and address for both family members, a letter of consent from the child, and a letter of authority from the parent, confirming that the solicitor is acting for them. Even the most straightforward SAR calls for the handling of personal data and confirmation of the requester’s identity and address.
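To make that conditional workflow concrete, the requirements in the solicitor example could be sketched as a simple rules function. To be clear, the field names and rules here are invented for illustration; this is not Hackney’s actual logic:

```javascript
// Hypothetical sketch of SAR document requirements; the shape of `request`
// and the exact rules are invented for illustration only.
function requiredDocuments(request) {
  const docs = [];
  for (const subject of request.subjects) {
    // Every data subject must prove identity and address.
    docs.push('Proof of ID for ' + subject.name);
    docs.push('Proof of address for ' + subject.name);
    // A child old enough to understand the request (like the 16-year-old
    // in the example) consents for themselves.
    if (subject.age < 18) {
      docs.push('Letter of consent from ' + subject.name);
    }
  }
  if (request.madeBy === 'solicitor') {
    // The solicitor needs written authority from the adult client.
    docs.push('Letter of authority from ' + request.clientName);
  }
  return docs;
}
```

Run against the solicitor example, a sketch like this yields six separate documents before the request can even be processed, which gives a feel for why the submission flow needs so much careful design.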

    Given all this complexity, it’s easy for there to be a lot of back and forth at the start of a SAR: clarifying a request, asking for documents, arranging receipt of documents, and so on.

    We’re hoping to build something that makes it much clearer to the person making the request what they’ll need to do, thereby taking some of that responsibility off the shoulders of the team managing the requests.

    process sketch showing first thoughts about SARs

    The sketch above was our first crack at something that could handle (most of) this complexity, and we’ve made a number of changes since then as our understanding of it all has increased.

Recruiting representative users to test SAR submission has proved challenging and wasn’t helped by the Beast from the East rearing its snow-covered head. But we changed tack and successfully ran some guerrilla testing in Hackney’s service centre last week, and we’re hoping to do further testing later in the project that has more chance of benefiting from contacts cultivated by individual services.

    Changing the conversation

    The conversation around information requests often focuses on the burden of responding. And although the number of information requests local authorities receive is unlikely to decrease any time soon, we’re hoping that through our collaboration with Hackney, we can make handling them a whole lot easier.

If we do it well, we think we could eliminate a lot of the process issues created by poorly-designed legacy systems, while baking in a fundamentally more open and transparent way of operating that has the potential to benefit citizen and state alike.

    Get involved

    If you’re responsible for managing FOI requests or data protection in your own public sector body and you’d like to follow this project in more detail—or if you’d like to participate in some of the discovery work—then please get in touch at hello@mysociety.org.

     

    Image: Sarah Palmer-Edwards

  7. Hello, is it me you’re searching for?

    There’s a common theme to a lot of mySociety sites: enter your postcode, see something that relates to you.

    From FaxYourMP—the mySociety project so old it predates mySociety itself (paradox!)—through to TheyWorkForYou, FixMyStreet, and WriteToThem, as well as a few of our commercial projects like Mapumental and Better Care, we’ve discovered that asking for a visitor’s location is a super effective way of unlocking clear, relevant information for them to act on.

    So perhaps it shouldn’t have come as a surprise that, while doing some regular monitoring of traffic on this website, we noticed a fairly significant number of people attempting to search for things like postcodes, MP names, and the topics of recent debates.

    Random sample of search terms, July–December 2017
    animal sentience corbyn
    germany CR0 2RH
    theresa may facebook
    EN3 5PB fire
    ruth davidson HG5 0UH
    eu withdrawal bill diane abbott

By default, the search box on this site delivered results from our blog post archive (it goes all the way back to 2004, don’t you know!)… which is pretty much what you’d expect if you know how we do things here at mySociety. We have this centralised website to talk about ourselves as an organisation; then each of our projects, such as TheyWorkForYou or FixMyStreet, is its own separate site.

    But, looking at these search terms, it was pretty clear that an awful lot of people don’t know that… and, when you think about it, why should they?

    The most obvious solution would just have been to direct visitors towards the individual sites, so they could repeat their searches there. Job done.

    But we figured, why inconvenience you? If you’ve made it this far, we owe it to you to get you the information you need as quickly as possible.

    Handily, we’ve got rather good at detecting valid postcodes when our users enter them, so programmatically noticing when a user was searching for a location wasn’t hard. And equally handily, TheyWorkForYou offers a powerful API that lets developers exchange a user’s postcode for detailed data about the boundaries and representatives at that location.
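The detection half of that combination can be sketched in a few lines. The regex below is my own rough approximation of UK postcode formats, not mySociety’s actual matcher:

```javascript
// Rough approximation of UK postcode formats — an illustrative assumption,
// not the validation mySociety actually uses in production.
const UK_POSTCODE = /^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$/i;

function looksLikePostcode(query) {
  return UK_POSTCODE.test(query.trim());
}

// Route a search query to the right kind of suggestion: a postcode lookup
// (which could then be passed to the TheyWorkForYou API) or a plain search.
function suggestionFor(query) {
  return looksLikePostcode(query) ? 'postcode-lookup' : 'site-search';
}
```

A postcode-shaped query gets routed to the lookup path; anything else falls back to an ordinary search suggestion.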

    What do you get when you combine the two? Automatic search suggestions for TheyWorkForYou, FixMyStreet, and WriteToThem, when you enter your postcode on www.mysociety.org.

    The search page is also aware of the most frequently searched-for MPs on our site, and will offer a direct link to their TheyWorkForYou profile if you search for their names.

And finally, if you search for something other than a postcode, we give you a single-click way to repeat your search automatically on TheyWorkForYou, opening up decades of parliamentary transcripts with one tap of your finger.

    It’s not a big, glamorous feature. But it’s something we know will come in useful for the few hundred people who search our site every week—possibly without their ever noticing this little bit of hand-holding as we steer them across to the site they didn’t even know they wanted. And most importantly, it should introduce a few more people to the wealth of data we hold about the decision-makers in their lives.

    Header image, Flickr user Plenuntje, CC BY-SA 2.0

  8. Talking about the mySociety design process

    Last month, I gave a talk at DotYork—a digital conference in the North of England—about the way mySociety designs and builds websites for different countries and cultures all around the world.

    If you’re into web technologies, or user research, or just generally the sort of work mySociety does, then it’s a fun 15 minute watch!

    You can download the slides from my talk here, or watch videos of all the other DotYork 2017 talks here.

    At the end of the Q&A session, I said I should write a blog post with links to some of the stuff I couldn’t fit into my talk. So here is that blog post!

    Header photo © Hewitt & Walker

  9. Designing for wordy languages at mySociety

    When someone uses mySociety software to report a street problem, or make a Freedom of Information request, it’s often in a language other than English, because our code is used to power sites all over the world.

    That’s fine: we include a facility for people to add translations to the sites they deploy, so, job done, right?

Except, unfortunately, there’s more to it than that. However much we complain about the idiosyncrasies of our language, there’s one thing English has got going for it, and that’s conciseness. Translations into other languages are almost always longer, which means words and phrases that fit quite nicely into our designs suddenly become problematic.

    A recent front-end design ticket in Alaveteli, our Freedom of Information platform, centred around improving the display of various standard elements (the navigation bar, language switcher, logged-in user links) when the Alaveteli site in question is displaying in a language other than English.

    Here’s a picture which shows exactly why that was an issue:

messy-alaveteli-header

It was enough to make a designer sob.

    To put it bluntly: As soon as those carefully-crafted navigation bar links get translated, all bets are off as to whether they’ll continue to fit in the space provided. It’s an issue that’s faced by anyone creating software designed for international reuse.

    So I figured I’d share a few things the mySociety design team has learned about internationalisation, and one quick trick that I recently started using to test international language lengths on our own websites.

    The problem

Not only are some languages more verbose than others (i.e. they use more words to convey the same concept), but many use more characters per word.

    Then there are other languages which use fewer—but more complex—characters that need to be displayed larger to still remain legible.

    The W3C (which sets standards for the web) suggests that front-end developers can expect the following ratio of increase/decrease in visual text width when translating from English into this handful of common languages:

Language | Translation | Ratio
Korean | 조회 | 0.8
English | views | 1
Chinese | 次檢視 | 1.2
Portuguese | visualizações | 2.6
French | consultations | 2.6
German | -mal angesehen | 2.8
Italian | visualizzazioni | 3

That’s a 160–200% increase in space required to display words in the European / South American languages that we deal with quite a lot here at mySociety.

    Often, you’re lucky, and the layout includes enough space to absorb the extra words. Headings and paragraphs of text are really good at this. Indeed, as the amount of text to be translated gets bigger, you notice that the translation has less effect on space, as the W3C, again, notes:

No. of characters in English source | Average expansion
Up to 10 characters | 200–300%
11–20 characters | 180–200%
21–30 characters | 160–180%
31–50 characters | 140–160%
51–70 characters | 151–170%
Over 70 characters | 130%
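If you wanted to use those bands in an automated layout check, they can be encoded as a tiny lookup. This helper is purely illustrative; the multipliers are simply the upper bound of each band above:

```javascript
// Purely illustrative: encode the W3C expansion bands as a multiplier you
// could use to pad test strings. Takes the upper bound of each band.
function expectedExpansion(englishLength) {
  if (englishLength <= 10) return 3.0; // 200–300%
  if (englishLength <= 20) return 2.0; // 180–200%
  if (englishLength <= 30) return 1.8; // 160–180%
  if (englishLength <= 50) return 1.6; // 140–160%
  if (englishLength <= 70) return 1.7; // 151–170% (yes, this band jumps back up)
  return 1.3;                          // 130%
}
```

So a 5-character button label is worth testing at around 15 characters, while a long paragraph only needs about a third more room.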

    So—no need to worry—it’s just short little bits of text that hurt the most. Phew.

    Hang on, short little bits of text… like all those buttons and links all over every single website mySociety makes?


    Don’t panic!

    That’s what mySociety has designers for 🙂

    There are lots of tricks we can use to reinforce our layouts to better handle long strings. For instance, where possible, we avoid creating horizontally-constrained navigation bars.

    And in some cases, we can use modern styling techniques like Flexbox to better handle overflowing text without harming legibility or the overall layout of the page.

    But testing the effectiveness of these techniques can take time and, while we have a fantastic network of volunteers and international partners who translate our open source projects, we’re often working on the initial layout and styling before that has a chance to happen.

    While I was working out fixes for the Alaveteli user links and language picker dropdown, I threw together a quick “pseudolocalize” function that temporarily makes the text longer, so we could preview how it’ll look once it gets translated.

    pseudolocalization-code-snippet

    Only later did I discover that “Pseudolocalization” is, apparently, a real thing, originating from the Windows developer community.

Typically, existing pseudolocalization functions do all sorts of orthographic substitutions to test how weird characters are displayed, as well as padding the strings to make them longer. So, something like Account Settings would be transformed into [!!! Àççôûñţ Šéţţîñĝš !!!].

    My little function skips the weird character substitutions, and instead just doubles the text content of any elements you tell it to.
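Since the snippet above only survives as a screenshot, here’s a reconstruction of roughly what it does. The real implementation may have differed; this is a sketch of the doubling idea, not the original code:

```javascript
// Reconstruction of the idea, not the original snippet: double the text in
// every text node under the matched elements, leaving nested markup intact.
function doubleText(text) {
  const trimmed = text.trim();
  // Skip whitespace-only nodes so indentation isn't doubled.
  return trimmed ? trimmed + ' ' + trimmed : text;
}

function pseudolocalize(selector) {
  document.querySelectorAll(selector).forEach(function (el) {
    // Walk text nodes only, so icons and nested elements are untouched.
    const walker = document.createTreeWalker(el, NodeFilter.SHOW_TEXT);
    let node;
    while ((node = walker.nextNode())) {
      node.textContent = doubleText(node.textContent);
    }
  });
}
```

Because it walks text nodes rather than replacing innerHTML, event handlers and nested elements survive the doubling.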

    So you can run…

    pseudolocalize('.navigation a')

    …in your browser console, to turn this…

    before

    …into this!

    after

    Yep, it’s useful and it’s ridiculous — our favourite combination.

    Plus, it’s super fast, and it works with nested elements, so if you were totally crazy, you could just run it on the entire 'body' and be done with it!

    pseudolocalize('body')
pseudolocalized

    Now, we’re not saying we’ll be able to cope with, say, the longest word in Sanskrit, which is 431 letters long, but this approach does make us pretty confident that we’ve got a great basis for whatever most languages can throw at us.

    If you’re a web developer with similarly ingenious tricks for improving the internationalization of your sites, share them in the comments box!

Photo of Nepalese prayer wheels by Greg Willis, CC BY-SA 2.0