On Twitter about 15 minutes ago, @greenerleith asked: “Has anyone worked out how to display the most recent #fixmystreet reports on a local map widget that can be embedded? #hyperlocal”
It’s very simple to do:
- Go to FixMyStreet, and locate any RSS feed of the latest reports you want (for the above map, I used Edinburgh Waverley’s postcode of EH1 1BB; you could have used reports to a particular council, or ward, using the Local alerts section). Copy the URL of the RSS feed.
- Go to Google Maps, paste the RSS feed URL into its search box, and click Search Maps.
- Click the “Link” link to the top right of the map, and copy the “Paste HTML to embed in website” code.
- Paste that code into your blog post, sidebar, or wherever (you can alter the code to change its size etc.).
The latest reports from FixMyStreet, superimposed on a Google Map, embedded in your blog. Hope that’s helpful.
When a bit of government forwards or attaches emails using Outlook, they get sent using a special, strange Microsoft email format. Up until now, WhatDoTheyKnow couldn’t decode it. You’d just see a weird attachment on the response to your Freedom of Information request, and probably not be able to do anything with it.
Peter Collingbourne got fed up with this, and luckily for us, he can code too. He forked our source code repository, and made a nice patch in his own copy of it.
He then told us about it, and I merged his changes into the main WhatDoTheyKnow code, tested them out on my laptop, then made them live. It all worked perfectly first time. Peter even added the new dependency on vpim to WhatDoTheyKnow’s conf/packages.
Now if you go to an Outlook attachment on WhatDoTheyKnow, such as this one, you’ll just see the files, and be able to download them and view them as HTML as normal. They’ll also get indexed by the search (although I’ll need to rebuild the index for that to work with old requests).
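This isn’t Peter’s patch itself, but as a rough illustration of where decoding that “special, strange Microsoft email format” starts: Outlook’s TNEF attachments (the classic winmail.dat) begin with a fixed 32-bit magic number, 0x223E9F78, stored little-endian, so you can at least recognise one before trying to unpack it.

```ruby
# Hypothetical helper (not the actual WhatDoTheyKnow code): detect a
# TNEF attachment by its magic number. TNEF files start with the
# 32-bit signature 0x223E9F78, stored little-endian.
TNEF_SIGNATURE = 0x223E9F78

def tnef_attachment?(data)
  return false if data.nil? || data.bytesize < 4
  # "V" unpacks an unsigned 32-bit little-endian integer
  data.unpack("V").first == TNEF_SIGNATURE
end
```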
If you want to have a go making an improvement to a mySociety site, you can get the code for most of them from our github repositories. For some sites, there’s an INSTALL.txt file explaining how to get a development environment set up. Let us know if you do anything – even incremental improvements to installation instructions are really useful. And new, useful, features like Peter’s are even more so.
I’ve been doing lots of research around “cloud computing” recently, so we can change how Mapumental works and take it out of private beta.
One thing that’s struck me is that there doesn’t seem to be a proper, industry standard name to distinguish what to me are two fundamentally different sorts of “cloud computing”. I’m focusing here entirely on cloud services for programmers (let’s leave what it means to end users or businesses for another day).
Here are my own names and descriptions of them:
1) Cloud hardware server provision (Cloud HSP)
Low level APIs for making and destroying (virtual) servers, and loading machine images onto them. e.g. Amazon Elastic Compute Cloud, Rackspace Cloud Servers, Eucalyptus’s EC2 bits. Basically, what Eucalyptus v 1.5 can do and what libcloud should do. (By analogy, this is the assembly language of cloud computing)
2) Cloud developer service provision (Cloud DSP) A service that a developer accesses with one name and a simple API, and behind the scenes it scales for him, automatically. e.g. Amazon Queue Service, Rackspace Cloud Files. (By analogy, this layer is the C programming language of cloud computing)
[as an aside, Google AppEngine is an interesting one. It is definitely in the Cloud DSP category, but I think it is larger than that - it is a whole set of APIs all in that category. Something like Google DataStore is a single Cloud DSP, albeit one apparently only accessible within AppEngine apps]
It’s possible to use a Cloud HSP (assembly language), along with a bunch of your own software or open source software, to build new Cloud DSPs (C code). Right now this is pretty hard – even quite well known open source distributed databases like CouchDB still need scripting just to make them replicate. The code that makes and destroys servers and gives the service one name needs manually stringing together with quite new bits of wire (things like scalr and Wackamole).
For this reason, I’m reluctant for mySociety to get into the “making our own Cloud DSP out of Cloud HSP” game. It feels to me like a suck of time, and like we wouldn’t be able to guarantee without lots of careful and expensive testing that it would scale. I’m more tempted to use the commercial Cloud DSP services where possible, even though they are proprietary. But use them via our own abstraction layer, so we can change as we need to. Of course, we have some C++ code (the public transport route finder), so will have to use the Cloud HSP API to get that going, perhaps with Amazon’s Auto Scaling. But it can jolly well use AQS and S3 to talk to other services.
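To make the abstraction-layer idea concrete, here’s a minimal sketch (all names hypothetical, not mySociety code): call sites depend on a tiny queue interface, and the proprietary Cloud DSP becomes just one pluggable backend, so changing provider means changing one class rather than every call site.

```ruby
# The interface our own code would program against.
class MessageQueue
  def push(message)
    raise NotImplementedError
  end

  def pop
    raise NotImplementedError
  end
end

# A local backend, handy for development and tests.
class InMemoryQueue < MessageQueue
  def initialize
    @items = []
  end

  def push(message)
    @items.push(message)
  end

  def pop
    @items.shift
  end
end

# A CloudQueue < MessageQueue wrapping, say, Amazon's queue service
# would slot in here without touching any call sites.
```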
So, what do you think about the names Cloud HSP/DSP? Are there already existing names for the distinction that I’m making? Is it a useful distinction for you? Can you think of better names?
WhatDoTheyKnow keeps growing and growing, sucking people in from Google as its archive of maybe 8.5% of Freedom of Information requests gets more and more detailed.
(Graph of number of FOI requests made using WhatDoTheyKnow over time; click for larger version)
There’s round about 8Gb of unfettered Government data in the core database, plus a whole bunch more for indexing and caching. For comparison, TheyWorkForYou (which now goes back to 1935) has 12Gb. And it’s catching up on traffic also – WhatDoTheyKnow has about half as many visitors as TheyWorkForYou.
Unfortunately, this newfound traffic has led to performance problems. You might have seen errors when using WhatDoTheyKnow in the last week or two. This post is firstly an apology for that. Thank you for your patience. Hopefully it is fixed now – do let us know if you still get problems. And secondly it is some techy stuff about debugging such problems in Ruby on Rails…
When WhatDoTheyKnow started failing, we did the obvious things to start with – moving the database to a separate server, and moving some other services off the same server, to give WDTK more room to breathe. It still kept breaking.
None of my server monitoring tools shed any clear light on the problem. I upgraded to the latest version of Passenger, the best Rails deployment tool I’ve seen yet. It’s pretty good, but still not mature enough for my liking. I was still getting the same problems with it, but reporting tools like passenger-memory-stats were really helpful.
Eventually I worked out that it was to do with the memory use of the Rails processes. Individual ones would leap up to 1Gb, and never drop back down. If several did, the server (with 4Gb of RAM) would start swapping and grind to a halt. The world of Ruby and Rails memory monitoring software is a patchwork at best, and in the end I found the simplest tools the most useful. Here are some:
- I found some Rails processes were getting jammed, and not dying even when I restarted Apache. I think in the end this was due to the Passenger spawning method, and our use of the Xapian Ruby module. Running Passenger in RailsSpawnMethod conservative mode made things much more robust.
- Monit, which in a previous life had a job holding up vital structural pillars of buildings with duct tape, makes you feel dirty. Actually it is really useful. Given I couldn’t quickly fix the problem, Monit let me at least reduce the suffering for people trying to use the site meanwhile. Here’s the rule I used, which gives Apache a kick every time server memory use is too high. It was firing every 5 or 10 minutes…
check system localhost
if memory > 3500 MB then exec "/usr/sbin/apache2ctl graceful"
- I found memory_profiler on a blog. It helps you find the kind of memory leak where you unintentionally keep a reference to an object you no longer use – with a specialist subject of string objects. This led to a fix to do with declaring static arrays in classes vs. modules, which I still don’t really understand. But it wasn’t the cause of the big 1Gb memory munching; there were no large enough leaks of this sort.
- The record_memory function in WDTK’s application controller came from another blog. It’s handy as it shows how much each request increases the Ruby process’s system memory use. With caveats, this was the best way for me to identify the most damaging requests (search results, and certain public body pages). And it also brought focus on the actual problem – the peak memory use during a request. That’s really important, because Ruby’s memory manager never returns memory to the operating system: the Gb leaps in memory use were caused by temporary memory used during certain requests, which the Ruby memory manager then never frees.
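As a sketch of the same idea (this is Linux-specific, and not the actual record_memory code): read the process’s resident set size from /proc before and after the work a request does, and log the difference.

```ruby
# Read this process's resident set size in kB from /proc (Linux only).
def resident_kb
  File.read("/proc/self/status")[/VmRSS:\s+(\d+)/, 1].to_i
rescue Errno::ENOENT
  0 # /proc doesn't exist on non-Linux systems
end

before = resident_kb
working_copy = "x" * 10_000_000 # stand-in for a request's temporary data
after = resident_kb
# (after - before) approximates this "request"'s cost; the peak matters,
# because Ruby won't hand that memory back to the operating system.
```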
- I made a bunch of functions culminating in allocated_string_size_around_gc. This was really useful in use with the “just add lots of print statements and fiddle” school of debugging. Not everyone’s favourite school, but if your test code can’t catch it, one I often end up using (it gets really involved rarely enough that it doesn’t seem worth setting up an interactive debugger). It led me to various peak memory savings, such as calling “text.gsub!” rather than “text = text.gsub” while removing (email addresses and private information) from FOI request responses, which help quite a bit when dealing with multi-megabyte attachments.
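The in-place trick mentioned above looks like this (the pattern here is illustrative, not the real removal code): gsub! modifies the string it is called on, while `text = text.gsub(...)` briefly holds both the old and new copies at once, which is exactly what hurts with multi-megabyte attachments.

```ruby
text = "Please reply to someone@example.com with details"

# In place: the original string is rewritten, no full second copy kept.
text.gsub!(/\b\S+@\S+\b/, "[email address]")

# Compare: text = text.gsub(/\b\S+@\S+\b/, "[email address]")
# allocates a complete modified copy before the old one can be freed.
```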
- Finally, I used the overlooked debugging tool that you should never rely on: common sense. That is, common sense informed by days of careful use of all the other tools. In order to quickly show text extracts when searching, WDTK stores the extracted attachment text in the database. A few of these attachments are quite large, and led to 50Mb fields, often several of which were being loaded and processed in one page request. Some time yesterday it became obvious to me that this would cause a high peak of memory use. I checked that that was the case, and this morning I changed it to use the full text for indexing, but to keep at most 1Mb for use in snippets. So sometimes now you won’t get a good search extract for a query, but that’s rare, and it will at least still return the right result.
I’ve more work to do; I think there are quite a few other quick wins, all of which are making the site faster too. I’m quite happy that WhatDoTheyKnow also has a bunch more test code as a result of all this.
On the other hand, what a disappointing disaster for open source languages beginning with P/R (as opposed to J). Yes, the help and tools were just about there to work it out, but they would seem primitive if you’d used, say, Java’s Memory Analyzer. Indeed, somebody over on StackOverflow suggested running your site in JRuby and using exactly that tool…
A number of people report dog fouling through FixMyStreet, using slightly more… colloquial language. A number of councils have strict obscenity filters, blocking anything containing swearing. As I’m a pragmatist and not that interested in campaigning against councils blocking legitimate emails from their citizens (feel free!), FixMyStreet simply changes any “dog shit” reference to “dog poo”. This works well for everyone.
Recently, the infamous Intellectual Property Manager from Portakabin™ Limited got in touch to complain about a couple of reports on FixMyStreet containing the words “portacabin” or “portaloo”. Again, as a pragmatist, I’m not really interested in whether users’ generic use of trade marks or trade mark variants in a problem report actually constitutes trade mark infringement (actually, I’d guess not); I just want legal people to go away and not waste our precious resources. So from now on, any report containing portakabin or similar will become [portable cabin], and portaloo will become [portable loo].
For anyone who’s interested, this is accomplished through a simple regular expression, that looks for porta followed by 0 or more spaces, then cabin, kabin, or loo, and sticks “ble” in the middle.
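In Ruby, the substitution described above might look like this (the real FixMyStreet pattern may differ slightly in detail):

```ruby
# Match "porta", zero or more spaces, then cabin/kabin/loo, and
# replace with the generic bracketed form described above.
def genericise(text)
  text.gsub(/porta\s*(cabin|kabin|loo)/i) do
    $1.downcase == "loo" ? "[portable loo]" : "[portable cabin]"
  end
end
```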
As a new edition has just been released, and I’ve had to tweak the parser to cope with the new highlighting, it’s a good time to write a brief article on TheyWorkForYou’s handling of the House of Commons Register of Members’ Financial Interests (the Register of Members’ Interests, as it was before the current edition). Way back in the day, a scraper/parser was written (by either Julian or Francis) that monitors the Register pages on www.parliament.uk for new editions, and downloads and broadly parses the HTML into machine-readable data. The XML produced can be found at http://ukparse.kforge.net/parldata/scrapedxml/regmem/ – TheyWorkForYou then pulls this XML into its database, and makes the latest data available on every MP’s page.
However, as it’s been scraping/parsing the Register since 2000, we can do more than that. Each MP’s page contains a link to a page giving the history of their entry in the Register – when things were added, removed, or changed. You can also view the differences between one edition of the Register and the next, or view a particular edition in a prettier form than the official site. There’s a central page containing everything Register-related at http://www.theyworkforyou.com/regmem/
We’ve had reports that our FixMyStreet iPhone app is crashing on iPhone 3.0, and so have withdrawn it from the App Store until we are able to find out what’s wrong and fix it. I’m afraid I don’t know when that will be, as it’s all rather busy at present – if anyone has the skills and would like to volunteer to help, the code is available and should just import into Xcode. I can supply some crash logs too.
Richard Pope has been redesigning mySociety’s biggest site TheyWorkForYou.com for a couple of months.
He’s done a heroic job, as has Matthew with his epic import of Hansard data from 1935 onwards. TheyWorkForYou is a much better site for their combined work recently. We’ll be writing more on the historic stuff soon.
There are a few things I’d like from you as a member of the mySociety community:
1. Please say a big thanks to Richard. This was not an easy or relaxing task at all, and he’s done it brilliantly. Just check a Lords debate to see the attention to detail. We are a very lucky organisation to have him, as he’s always in demand.
2. Please give some constructive criticism on how it could be even better (please note, focussing on design here, we already have a load of feature priorities to deliver).
3. Anyone who could help supply a redesigned logo, or some nicely processed parliamentary-themed artwork to sit in the background grey-boxes on the homepage would be doing a very Good Deed for mySociety.
And lastly, please do pledge to become a TheyWorkForYou Patron, so we can keep doing things like this in the future!
They could perhaps have picked a better day, as it was quite serious – at the stroke of midnight on the 1st of April, 37 district councils and 7 county councils in England ceased to exist, replaced by 9 new unitary authorities. This means people in Durham, Northumberland, Cornwall, Shropshire, Wiltshire, Cheshire, and Bedfordshire only have one principal local authority to deal with now. The Wikipedia article on the changes has more information on the background to this change.
Obviously this meant some work for WriteToThem and FixMyStreet, both of which require up-to-date local council information. Our database of voting areas, MaPit, has “generations”, so we can keep old areas around for various historical purposes. So firstly, I created a new generation and updated all the areas that weren’t affected to the new generation. Next, six of the new unitary authorities (all the counties except Cheshire and Bedfordshire, plus Bedford) share their boundaries and wards with the coterminous councils they’re replacing, so for them it was a simple matter of updating those councils to be unitary authorities.
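The generations idea can be sketched like this (field names and generation numbers here are hypothetical, not MaPit’s actual schema): each area records the range of generations in which it is valid, so defunct councils survive for historical purposes while lookups default to the present.

```ruby
# Each area is valid from generation_low to generation_high inclusive.
Area = Struct.new(:name, :generation_low, :generation_high)

areas = [
  Area.new("Cheshire County Council", 1, 12),  # ceased to exist
  Area.new("Cheshire East Council",  13, 13),  # new unitary authority
]

current_generation = 13
live = areas.select do |a|
  a.generation_low <= current_generation &&
    current_generation <= a.generation_high
end
# live contains only Cheshire East; the old county is kept, but hidden
# from current lookups.
```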
That left Bedfordshire and Cheshire. I created areas for the three new councils (Cheshire West and Chester, Cheshire East, and Central Bedfordshire), and transferred across the relevant wards from the old county councils – basically a manual process of working out the list of correct ward IDs. april2009-update.sql has the gory SQL details if you care.
WriteToThem was now dealt with, but FixMyStreet needed a little more work. The councils that no longer existed had understandably disappeared from the all reports table, so I had to modify the function that fetches the list of councils to optionally return historical areas so they could be included. And lastly, FixMyStreet needs a way of mapping a point on a map to the relevant council. For this, it needs to know the area covered by a council, which was missing for the new authorities I’d manually created. Thankfully, each of the three new authorities is made up of the areas of either 2 or 3 district councils (e.g. Cheshire East is the area covered by Congleton, Macclesfield, and Crewe and Nantwich), so I just had to write a script (april2009-construct-new.pl) that stuck those areas together to create the area of the new council. It all seems to work, and I’m sure our users will be in touch if it doesn’t.
So goodbye to Alnwick, Bedfordshire, Berwick-upon-Tweed, Blyth Valley, Bridgnorth, Caradon, Carrick, Castle Morpeth, Cheshire, Chester, Chester-le-Street, Congleton, Crewe and Nantwich, Derwentside, Durham City, Easington, Ellesmere Port and Neston, Kennet, Kerrier, Macclesfield, Mid Bedfordshire, North Cornwall, North Shropshire, North Wiltshire, Oswestry, Penwith, Restormel, Salisbury (which is getting a new town council), Sedgefield, Shrewsbury and Atcham, South Bedfordshire, South Shropshire, Teesdale, Tynedale, Vale Royal, Wansbeck, Wear Valley, and West Wiltshire. RIP.
FixMyStreet has a lot of RSS feeds. There’s one for every one-tier council (170), one for every ward of every one-tier council (another 5,044), two for every two-tier (county and district) council (544), and two for every ward of every two-tier council (20,296) – two per two-tier council because you might want either problems reported to one council of a two-tier set-up in particular, or all reports within the council’s boundary.
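Totting up the fixed feeds listed above:

```ruby
one_tier       = 170     # one feed per one-tier council
one_tier_wards = 5_044   # one per ward of a one-tier council
two_tier       = 544     # two per two-tier council
two_tier_wards = 20_296  # two per ward of a two-tier council

total = one_tier + one_tier_wards + two_tier + two_tier_wards
# 26,054 council and ward feeds, before any grid-point feeds are counted
```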
Then there’s an RSS feed every 162m across Great Britain in a big grid, returning all reports within a radius of that point – the radius by default being automatically determined by that point’s population density, but customisable to any distance if preferred. At a very rough approximation, assuming Great Britain is a rectangle around its extremities (which it’s not), that’s 19 million RSS feeds, lots of which will admittedly be very similar.
Every single one of those feeds can be subscribed to by email instead if that’s preferable to you, and all are accessible through a simple interface at http://www.fixmystreet.com/alert.
However, none of these RSS feeds was suitable for the person who emailed from a Neighbourhood Watch site and said that all they had was a postcode and they wanted to display a feed of reports from FixMyStreet. Given you could obviously look up a FixMyStreet map by postcode, it did seem odd that I hadn’t used the same code for the RSS feeds. Shortly thereafter, this anomaly was fixed, and if you now go to a URL of the form http://www.fixmystreet.com/rss/pc/postcode you will be redirected to the appropriate local reports feed for that postcode (I could say that adds another 1.7 million RSS feeds to our lot, but given they’re only redirects, that’s not strictly true). And after a couple more emails, I also added pubDate fields to the feeds which should make displaying in date order easier.
It’s great to see our RSS feeds being used by other sites – other examples I’ve recently come across include Brent Council integrating FixMyStreet into their mapping portal (select Streets, then FixMyStreet), or the Albert Square and St Stephen’s Association listing the most recent Stockwell problems in their blog sidebar. If you’ve seen any notable examples, do leave them in the comments.