We’ve recently introduced two new ways to locate yourself, and your reports, on FixMyStreet.
From up in the air
You might have noticed a discreet little ‘aerial’ button at the bottom of FixMyStreet’s map pages recently.
This toggles the view from the usual Ordnance Survey maps to a Bing aerial satellite view:
We hope this will make it easier for people to locate their reports accurately, in those cases where it’s a bit easier to identify landmarks from above.
This isn’t an entirely new departure for FixMyStreet: as far back as 2013 the site we made for the City of Zurich had a satellite view as default — and indeed, it still does.
At the moment, this feature is available on the nationwide fixmystreet.com, and on fifteen client authorities’ sites. Why not all authorities’ implementations? It’s basically to do with whether they have their own map servers: where we host the maps, it’s obviously more straightforward for us to deliver the alternative view.
Open Location Codes
Another option to help you find just the right spot for your report comes with the introduction of Open Location Codes, also known as OLCs or Plus Codes.
Coincidentally, these also have a connection with Zurich, as they were developed in Google’s offices there. They’re basically a more convenient and quicker way of entering latitude and longitude, and can be used to identify any spot on the planet (though of course, each FixMyStreet site has its own bounds).
As their name suggests, OLCs are open source and available for anyone to use. Want to try it out? Google Maps on mobile gives you an OLC when you drop a pin: see more details here.
This function adds to the number of ways you can search for a location on FixMyStreet from the homepage search box, which include inputting a postcode, a street name, an area, a town or city, latitude and longitude, and allowing the site to auto-locate you.
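For the curious, the encoding itself is simple enough to sketch from the published spec: an OLC is just a latitude and longitude written out as interleaved base-20 digit pairs, each pair narrowing the area by a factor of 20 in each direction. This is our own illustrative Python, not code from the official openlocationcode libraries:

```python
# Minimal sketch of standard 10-digit Plus Code encoding. Function and
# variable names are ours; the alphabet and pair scheme come from the
# Open Location Code spec.
OLC_ALPHABET = "23456789CFGHJMPQRVWX"

def encode_olc(lat, lng):
    """Encode a lat/lng to a 10-digit Plus Code (e.g. 8FVC9G8F+6X)."""
    lat = min(max(lat + 90.0, 0.0), 180.0 - 1e-9)  # shift latitude to 0..180
    lng = (lng + 180.0) % 360.0                    # shift longitude to 0..360
    code = ""
    resolution = 20.0
    for _ in range(5):                             # five digit pairs
        lat_digit = int(lat / resolution)
        lng_digit = int(lng / resolution)
        code += OLC_ALPHABET[lat_digit] + OLC_ALPHABET[lng_digit]
        lat -= lat_digit * resolution
        lng -= lng_digit * resolution
        resolution /= 20.0
        if len(code) == 8:                         # the '+' sits after 8 digits
            code += "+"
    return code
```

Fittingly, the coordinates of Google’s Zurich office (47.365590, 8.524997) encode to 8FVC9G8F+6X.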
So here’s hoping these developments will allow for ever more accuracy in report locations.
Image: William Hook
So we wanted to build an app for FixMyStreet. Easy: we just had to make a cut-down version of the website, right?
Hmm, not quite.
In our two recent posts, we announced the new FixMyStreet apps, and their developer Struan went into some technical detail about why and how we built them.
Now he explains a little more about what informed the decisions he made during the apps’ development.
Moving the map, not the pin
When you use the desktop version of FixMyStreet, the first thing it asks for is your location, and there’s a good reason for that. It’s such a good reason that it needed to apply to the app as well.
On the desktop site we ask you to input your postcode or street name. With a mobile app, it’s much more likely that you’ll be reporting a problem that’s right in front of you, so we can usually skip that step and show you a map of your current location.
However, while the accuracy of geolocation technology is pretty good, it’s not perfect, so we wanted to let users fine-tune the location.
On the website you click on the map to drop a pin where the problem is, but we’ve found this isn’t the best solution on a small screen. Fingers are big and the end of a pin is small, so it can take several clicks to position the pin correctly.
We quickly realised that having a central static crosshair, and moving the map to the location was a much easier and more accurate way to set a location.
Sending reports made offline
As we explained in the previous post, one of the benefits of the apps over the mobile site is that you can make reports even if you have no phone coverage. The app stores all the details until you’re back within range and ready to send them off.
One decision we made, which might seem initially puzzling, is that these offline reports don’t automatically get sent off to the council once you’re back within range of a phone or wifi signal.
There are two linked reasons for this, and they’re both related to the fact that FixMyStreet lets you report a problem even if you don’t know who to send it to.
Simply, before we can work out who to send the report to, we need to know exactly where you are – and that you are within FixMyStreet’s area of coverage (ie, within the UK).
Your location also dictates the categories that we show you. Each council has its own categories, and in areas covered by two tiers of government, each council will deal with different types of report. So for example, your county council might deal with potholes, while your district council handles dog fouling.
Once you’re back online we can check that the location is one we can accept a report about, and then fetch the list of categories for you to pick from.
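That back-online flow can be sketched like this; locate_councils and fetch_categories are stand-ins we’ve invented for the real FixMyStreet lookups, purely to illustrate the logic:

```python
# Illustrative sketch only: why an offline report waits for connectivity
# before it can be completed and sent.
def finalise_offline_report(report, locate_councils, fetch_categories):
    """Check the stored location is inside an area we cover, then gather
    the categories offered by every council responsible for that point."""
    councils = locate_councils(report["lat"], report["lon"])
    if not councils:
        # Outside FixMyStreet's coverage: there's nobody to send it to.
        return {"status": "outside-coverage", "categories": []}
    # In two-tier areas each council handles different report types,
    # so the user picks from the combined list.
    categories = sorted({cat for c in councils for cat in fetch_categories(c)})
    return {"status": "needs-category", "categories": categories}
```

With a two-tier location, the user would be offered both councils’ categories merged into one list to choose from.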
In effect, this delay is also a second chance for you to check your report before you send it off, although that was never the reason for the decision!
The constant map
The final decision we made was to leave the map in the background at every step of the app process.
We initially designed it with the map only appearing when you needed it, but having the map underlying the reporting process provides a nice bit of continuity with the website, and seemed to make the app cohere better too. So, while there’s no functional need for it to be there, we decided to keep things uniform.
If anything else about the app has got you wondering, do feel free to leave a comment below!
Photos by Cali4Beach and Christopher Bulle, and adapted from an image by Premshree Pillai (CC)
Yesterday, we announced the new FixMyStreet apps (download here for Apple devices and here for Android).
In this post, we want to explain a bit more about why we spent time and effort on making them, when normally we advocate for mobile websites.
Plus, for our technical audience, we’ll explain some of the tools we used in the build.
The why bit
When we redesigned FixMyStreet last year, one of the goals was to provide a first class experience for people using the website on their mobile phone browsers. We’re pretty happy with the result.
We’re also believers in only building an app if it offers you something a mobile website can’t. There are lots of reasons for that: for a start, apps have to be installed, adding a hurdle at which people may just give up. They’re time-consuming to build, and you probably can’t cater for every type of phone out there.
Given that, why did we spend time on making a mobile app? Well, firstly, potholes aren’t always in places with a good mobile signal, so we wanted to provide a way to start reporting problems offline and then complete them later.
Secondly, when you’re on the move you often get interrupted, so you might well start reporting a problem and then become distracted. When that happens, reports in browsers may get lost, so we wanted an app that would save it as you went along, and allow you to come back to it later.
And the how bit (with added technical details, for the interested)
Having decided to build an app, the next question was how to build it. The main decision was whether to use the native tools for each phone operating system, or one of the various toolkits that allow you to re-use code across multiple operating systems.
We quickly settled on Phonegap, a cross-platform toolkit, for several reasons: we’d already used Phonegap successfully in the past to build an app for Channel 4’s Great British Property Scandal (which won an award, so clearly something went right) and for Züri Wie Neu in Zurich, so it was an easy decision to use it again.
We decided to focus on apps for Android and iOS, as these are the two most popular operating systems. Even with this limitation, there is a lot of variety in the size and capability of devices that could be running the app – think iPads and tablets – but we decided to focus primarily on providing a good experience for people using phone-sized devices. This decision was partly informed by the resources we have at hand, but the main decider was that we mostly expect the app to be used on phones.
There was one big challenge: the functionality that allows you to take photos in-app. We just couldn’t get it to work with older versions of Android – and it’s still not really adequate. We just hope most people are updating their operating systems! Later versions of Android (and iOS) were considerably less frustrating, and perhaps an earlier decision to focus on these first would have led to a shorter development process.
So, there you have it. The app does what we wanted it to (on the majority of systems), and you can download it right now (Android users go here; iOS here).
On balance though? We’d still advocate a mobile-optimised browser site almost every time. But sometimes circumstances dictate – like they did for FixMyStreet – that you really need an app.
We’d give you the same advice, too, if you asked us. And we’d happily build you an app, or a mobile-friendly site, whichever was more suitable.
Photos by Tup Wanders and Nicholas A Tonelli (CC)
21st to 29th July is Love Parks Week – a campaign that raises awareness of the importance of green spaces in local communities.
This year, they are focusing on “healthy” parks – that is to say, parks that are safe, clean, well-maintained, a home for wildlife, and a hub for the local community.
The Love Parks website features a quick online health check which you can complete for your local facilities.
If your local park isn’t up to scratch, this is a great week to report it via FixMyStreet. You might think that you can only report problems on streets and roads, but in fact the automatic geolocation function and the zoomable, draggable maps mean that you can also pinpoint issues very precisely, even if they’re on open ground.
So, if your playground equipment is rusty, the pond is full of algae, or the dog bin’s overflowing, you know where to turn.
Image by Barbara Mazz, used with thanks under the Creative Commons licence.
Last month saw the launch of not one but two websites asking the public to report empty properties to the relevant council. First off the blocks was mySociety’s ReportEmptyHomes.com, commissioned by the Empty Homes Agency, followed shortly afterwards by EveryHomeCounts.info from a group of eight councils in Surrey and Hampshire. Since mySociety claims to want to show the public sector how to use the internet properly, I thought it might be interesting to compare the two sites, at least from a user’s perspective.
I’m going to imagine I was walking down, say, Fosters Lane in Knaphill, Surrey, and I noticed that the house on the corner next to the chip shop was in a state of disrepair.* I snapped a picture on my mobile phone, and I want to send it to the council to see if they can do something about it.
[*I ought to just add that this is entirely fictional. I’ve never been to Knaphill, I’ve no idea whether there’s a chippy on Fosters Lane, and even if there is, the house next to it probably belongs to a lovely couple. Please don’t go taking pictures of their house for the council.]
So, first up: EveryHomeCounts.info. Clicking the big red REPORTING AN EMPTY PROPERTY button takes me to a page of text telling me why the council might like people to report empty properties, although presumably if I’ve got as far as finding the website and clicking the big red button, I’m already convinced of the case. At the bottom of the text I’m invited to “click here” to report an empty property.
On the next page I’m asked for… a whole load of personal information. I want to tell you about an empty house; do I really need to declare my title, first name, surname, house name, house number, street, locality, town, county, postcode, country, telephone number, and email address before doing so? Well, as it turns out, no — they only insist on an email address (although the single letter “f” was accepted as a valid email address).
On to page four, and I’m finally asked for the address of the house. I suppose “house on the corner next to the chippy, Queens Road, Knaphill” would probably be enough for the council to identify it. But then — get this — they want me to tell them which borough council might be responsible for this address, so that the report can be sent to the right place! Unless I happen to live in that street, how would I know? Even if I could have an educated guess, it might be near a boundary, or just over a boundary… Leaving the field blank isn’t allowed, and there’s no option that says “I’m not sure, sorry” — I’m told in red ink that I must specify a council if I want to continue filing my report.
Finally, I reach a screen that says at the top, “Thank you. You have reached the end of this form. blah blah” The second paragraph says, “What will happen next? The council will process your form. You will receive an email blah blah.” So, I pat myself on the back, turn off the computer and go for a walk. Except that if I’d scrolled down the page, I would have seen the “submit” button, along with the “review” and “cancel” buttons. My form hasn’t been submitted, and I’ve wasted half an hour filling in a form that’s been thrown away.
Now, what would have happened if I’d gone to ReportEmptyHomes.com instead?
The top of the front page asks me for a postcode, street name or area. I enter “Fosters Lane, Knaphill” and hit enter. This brings up an Ordnance Survey map with Fosters Lane in the middle of it, and I click on the offending property. The text on the page immediately changes and tells me that this problem falls in the area of Woking Borough Council, and I’m asked for a description of the property, a photo if I’ve got one to upload, my name, email and phone number.
Having filled in the information and clicked “submit”, I’m told to go off and check my emails, where I’ll find a confirmation link to click. This finalises the report.
So, how do the two sites compare? The mySociety site certainly gets the user through the process quicker, and offers maps and photos to boot. It helps the user greatly by taking responsibility for finding the right council, and does so for the whole country too, not just for a couple of counties in the south. On the down side, one could question why it’s so important to verify the user’s email address before filing the report; waiting for a confirmation link by email adds an extra hurdle which will probably trip at least some users, so why do it?
Also, EveryHomeCounts.info isn’t just for filing reports about empty homes; it contains information on buying, selling, owning and letting them too, providing ways for local people to perhaps make use of empty properties without enlisting the council’s help at all, which can only be a good thing.
To be fair, the councils concerned should be applauded for taking the initiative to launch this service, and I hope it proves to be a worthwhile use of council tax money. It’s great to see public bodies using the internet in innovative ways to try to make concrete improvements in people’s immediate environment. It appears though that mySociety have shown that it can be done better, and for the whole country, and probably more cheaply to boot.
New sites are lovely, and I talk about Neighbourhood Fix-It improvements further down, but there’s still quite a bit of work that needs to go into making sure our current sites are always up-to-date, working, and full of the joys of spring. Here’s a bit of what I’ve been up to recently, whilst everyone else chats about database upgrades, server memory, and statistics.
The elections last week meant much of WriteToThem has had to be switched off until we can add the new election results – that means the following aren’t currently contactable: the Scottish Parliament; the Welsh Assembly; every English metropolitan borough, unitary authority, and district council (bar seven); and every Scottish council. The fact that the electoral geography has changed a lot in Wales means there will almost certainly be complicated shenanigans for us in the near future so that our postcode lookup continues to return the correct results as much as possible.
Talking of postcode lookups, I also noticed yesterday that some Northern Ireland postcodes were returning incorrect results, which was caused by some out of date entries left lying around in our MaPit postcode-to-area database. Soon purged, but that led me to spot that Gerry Adams had been deleted from our database! Odd, I thought, and tracked it down to the fact our internal CSV file of MLAs had lost its header line, and so poor Mr Adams was heroically taking its place. He should be back now.
A Catalan news article about PledgeBank brought a couple of requests for new countries to be added to the site’s country list. We’re sticking to the ISO 3166-1 list of country codes, but the requests led us to spot that Jersey, Guernsey and the Isle of Man had been given full entry status in that list and so needed adding to our own. I’m hoping the interest will lead to a Catalan translation of the site; we should hopefully also have Chinese and Belarusian translations soon, which will be great.
Neighbourhood Fix-It update
New features are still being added to Neighbourhood Fix-It.
Questionnaires are now being sent out four weeks after a problem is sent to the council, asking the person who reported it to check its status and thereby keep the site up-to-date. Adding the questionnaire functionality threw up a number of bugs elsewhere – the worst of which was that we would have sent email alerts to people whether their alert had been confirmed or not. Thankfully, there hadn’t yet been any such alert, phew.
Lastly, the Fix-It RSS feeds now have GeoRSS too, which means you can easily plot them on a Google map.
So, PledgeBank got Slashdotted a couple of weeks ago when Mike Liveright’s $100 laptop pledge was linked from a post about the laptop. We didn’t cope very well.
Unfortunately, PledgeBank is a pretty slow site. Generating the individual pledge page (done by mysociety/pb/web/ref-index.php) can take anything up to 150ms. That’s astonishingly slow, given the speed of a modern computer. What takes the time?
It’s quite hard to benchmark pages on a running web server, but one approach that I’ve found useful in the past is to use an analogue of phase-sensitive detection. Conveniently enough, all the different components of the site — the webserver, the database and the PHP process — run as different users, so you can easily count up the CPU time being used by the different components during an interval. To benchmark a page, then, request it a few times and compute the amount of CPU time used during those requests. Then sleep for the same amount of time, and compute the amount of CPU time used by the various processes while you were sleeping. The difference between the values is an estimate of the amount of CPU time taken servicing your requests; by repeating this, a more accurate estimate can be obtained. Here are the results after a few hundred requests to http://www.pledgebank.com/100laptop, expressed as CPU time per request in ms:
Subsystem     User    System
apache        ~0      ~0
PostgreSQL    55±9    6±4
PHP           83±8    4±4
(The code to do the measurements — Linux-specific, I’m afraid — is in mysociety/bin/psdbench.)
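The differencing idea itself is easy to illustrate. This Python sketch is ours, not psdbench, and measures only the current process rather than per-user CPU accounting across the webserver, database and PHP, but it shows the busy-minus-idle subtraction at the heart of the technique:

```python
import time

def cpu_used_during(action, duration):
    """Run `action` in a loop for `duration` seconds of wall-clock time and
    return the CPU seconds this process consumed meanwhile."""
    start = time.process_time()
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        action()
    return time.process_time() - start

def estimate_cpu_cost(workload, window=0.2):
    """CPU burned while driving the workload, minus CPU burned over an
    equal window of idling, estimates the workload's true cost with
    background noise subtracted out."""
    busy = cpu_used_during(workload, window)
    idle = cpu_used_during(lambda: time.sleep(0.01), window)
    return busy - idle
```

Repeating the measurement and averaging, as the post describes, tightens the estimate further.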
So that’s pretty shocking. Obviously if you spend 150ms of CPU time on generating a page then the maximum rate at which you can serve users is 1,000 / 150 ≈ 7 requests/second per CPU, which is pretty miserable given that Slashdot can relatively easily drive 50 requests/second. But the really astonishing thing about these numbers is the ~83ms spent in the PHP interpreter. What’s it doing?
The answer, it turns out, is… parsing PHP code! Benchmarking a page which consists only of this:
<? /* ... */
require_once '../conf/general';
require_once '../../phplib/db.php';
require_once '../../phplib/conditional.php';
require_once '../phplib/pb.php';
require_once '../phplib/fns.php';
require_once '../phplib/pledge.php';
require_once '../phplib/comments.php';
require_once '../../phplib/utility.php';
exit;
?>
reveals that simply parsing the libraries we include in the page takes about 35ms per page view! PHP, of course, doesn’t parse the code once and then run the bytecode in a virtual machine for each page request, because that would be too much like a real programming language (and would also cut into Zend’s market for its “accelerator” product, which is just an implementation of this obvious idea for PHP).
So this is bad news. The neatest approach to fixing this kind of performance problem is to stick a web cache like squid in front of the main web site; since the pledge page changes only when a user signs the pledge, or a new comment is posted, events which typically don’t occur anywhere near as frequently as the page is viewed, most hits ought to be servable from the cache, which can be done very quickly indeed. But it’s no good to allow the pledge page to just sit in cache for some fixed period of time (because that would be confusing to users who’ve just signed the pledge or written a comment, an effect familiar to readers of the countless “Movable Type” web logs which are adorned with warnings like, “Your comment may take a few seconds to appear — please don’t submit twice”). So to do this properly we have to modify the pledge page to handle a conditional GET (with an If-Modified-Since: or If-None-Match: header) and quickly return a “304 Not Modified” response to the cache if the page hasn’t changed. Unfortunately if PHP is going to take 35ms to process such a request (ignoring any time in the database), that still means only 20 to 30 requests/second, which is better but still not great.
(For comparison, a mockup of a perl program to process conditional GETs for the pledge page can serve each one in about 3ms, which isn’t much more than the database queries it uses take on their own. Basically that’s because the perl interpreter only has to parse the code once, and then it runs in a loop accepting and processing requests on its own.)
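Here is the shape of that conditional-GET check, sketched in Python rather than the site’s PHP; the header name is standard HTTP but the function itself is purely illustrative:

```python
from email.utils import formatdate, parsedate_to_datetime

def maybe_not_modified(request_headers, last_change_epoch):
    """Return True when a 304 Not Modified is safe to send: the client (or
    the cache in front of us) sent If-Modified-Since, and the pledge has
    not changed since that time."""
    ims = request_headers.get("If-Modified-Since")
    if not ims:
        return False
    try:
        since = parsedate_to_datetime(ims).timestamp()
    except (TypeError, ValueError):
        return False  # unparseable date: fall back to serving the full page
    return last_change_epoch <= since
```

The point of answering this check as early as possible is that everything after it (the expensive library parsing and page generation) can be skipped entirely.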
However, since we unfortunately don’t have time to rewrite the performance-critical bits of PledgeBank in a real language, the best we can do is to try to cut the number of lines of library code that the site has to parse on each page view. That’s reduced the optimal case for the pledge page — where the pledge has not changed — to this:
<? /* ... */
require_once '../conf/general';
require_once '../../phplib/conditional.php';
require_once '../../phplib/db.php';
/* Short-circuit the conditional GET as soon as possible -- parsing the rest of
 * the includes is costly. */
if (array_key_exists('ref', $_GET)
        && ($id = db_getOne('select id from pledges where ref = ?', $_GET['ref']))
        && cond_maybe_respond(intval(db_getOne('select extract(epoch from pledge_last_change_time(?))', $id))))
    exit();
/* ... */
?>
— that, and a rewrite of our database library so that it didn’t use the gigantic and buggy PEAR one, has got us up to somewhere between 60 and 100 reqs/sec, which while not great is enough that we should be able to cope with another similar Slashdotting.
For other pages where interactivity isn’t so important, life is much easier: we can just emit a “Cache-Control: max-age=…” header, which tells squid that it can re-use that copy of the page for however long we specify. That means squid can serve that page at about 350 reqs/sec; unfortunately the front page isn’t all that important (most users come to PledgeBank for a specific pledge).
There’s a subtlety to using squid in this kind of (“accelerator”) application which I hadn’t really thought about before. Which page you get for a particular URL on PledgeBank (as on lots of other sites) varies based on the content of various headers sent by the user, such as cookies, preferred languages, etc.; for instance, if you have a login cookie, you’ll see a “log out” link which isn’t there if you’re an anonymous user. HTTP is set up to handle this kind of situation through the Vary: header, which the server sends to tell clients and proxies on which headers in the request the content of the response depends. So, if you have login cookies, you should say, “Vary: Cookie”, and if you do content-negotiation for different languages, “Vary: Accept-Language” or whatever.
PledgeBank has another problem. If the user doesn’t have a cookie saying which country they want to see pledges for, the site tries to guess, based on their IP address. Obviously that makes almost all PledgeBank pages potentially uncachable — the Vary: mechanism can’t express this dependency. That’s not a lot of help when your site gets featured on Slashdot!
The (desperately ugly) solution? Patch squid to invent a header in each client request, X-GeoIP-Country:, which says which country the client’s IP address maps to, and then name that in the Vary: header of the outgoing pledges. It’s horrid, but it seems to work.
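The effect of that patch is easy to model: a cache keys each stored response by the URL plus the request-header values named in Vary:, so inventing an X-GeoIP-Country header gives the cache one copy of each page per country. A toy sketch of such a cache key (our own illustration, not squid’s actual implementation):

```python
def cache_key(url, vary, request_headers):
    """Build a cache key the way an HTTP cache does: the URL, plus the
    value of every request header named in the response's Vary: header."""
    parts = [url]
    for name in vary.split(","):
        name = name.strip()
        parts.append(name + "=" + request_headers.get(name, ""))
    return "|".join(parts)
```

Two visitors from different countries thus get distinct cache entries, while two visitors from the same country share one, which is exactly the behaviour the patched squid needed.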
So, it’s my turn to write something here. Well, this’ll be short. As Francis mentions below, PledgeBank launched today, and everything’s gone reasonably smoothly, with the exception of some tedious PHP bug we haven’t tracked down yet. With any luck the new version of PHP will fix it, so there won’t be hours of painful debugging to do. (I could bore you for hours with my opinions of PHP — actually, I could probably shorten that quite a lot if I were allowed to swear, but this is a family-friendly ‘blog — but let’s just say that fixing PHP problems Is Not My Favourite Job.)
Instead I’ll say what I’m doing right now, which is beginning to add geographical lookup to PledgeBank. At the moment we ask pledge authors for a country (though it can either be “UK” or “global” at the moment), and, if they’re in the UK, a postcode. The idea is to “georeference” (i.e. look up the coordinates of) the postcode, though we don’t actually do that yet. So I’m modifying the database a bit to store coordinates (as a latitude/longitude, so that we don’t have to write a separate case for every wacky national coordinate grid) and generalise the notion of “country” so that we can let non-Brits actually put in their own countries when they create pledges.
Other things we’ve discovered today:
- People are confused by the “(suspicious signer?)” link next to signatures on each pledge page — several people thought that we were reporting our suspicions of the signer. You probably think that’s stupid, but if so that’s only because you’re familiar with sites that have this sort of retroactive moderation button everywhere. Actually it’s us that’s being stupid and we’re going to remove it until we have a better way to implement it — at the moment we think we’re mostly on top of the occasional joke/abusive signature.
- People are confused by the pledge signature confirmation mail, which currently reads (for instance),
Please click on the link below to confirm your signature on the pledge at the bottom of this email.
The pledge reads:
‘Phil Booth will refuse to register for an ID card and will donate £10 to a legal defence fund but only if 10,000 other people will also make this same pledge.’
— the PledgeBank.com team
We got several emails from people saying “your site has got my name wrong — I’m not Phil Booth”. The point is that Phil Booth wrote the pledge, so it’s in his name; the email reflects that. But that’s not obvious to the signer, and since the only name in the body of the email isn’t theirs, they think it’s got it wrong and complain. (To be fair, only three out of ~1,100 did, but that’s still bad.) This is the sort of problem we need user testing to spot: none of us saw anything wrong with the text when we were testing it. So we need to reword that.
- We’ve had several people email to say that they’d like to do versions of PledgeBank in their own countries, and we’d like to hear from anyone interested in localising the mySociety projects who has time, expertise or even just opinions to donate. If that’s you, please get in touch!
And probably some other stuff, but I said this post would be short….