Stats Is Fun!

So, Francis and Matthew have been working hard to produce the 2005 stats for WriteToThem, which measure, among other things, how responsive MPs are to messages from their constituents. We’ve had a couple of questions about what the quoted confidence intervals for the stats actually mean, so some notes on that and also on conclusions that can be drawn from them:

We get the responsiveness stats by sending each user of WriteToThem a questionnaire two weeks after their message is sent to their MP (or other representative), with a reminder after three weeks. The questionnaire asks them whether they’ve had a response from the MP, and, as a follow-up question, whether this is the first time they’ve ever contacted an elected representative.

Now, an estimate of the MP’s true response rate (that is, the probability that a random letter from a member of the public will receive an answer within 2–3 weeks) is obviously the number of respondents answering “yes” to the first questionnaire question, divided by the total number of responses to the question. (Assuming, that is, that the people who answer the question at all are representative of all the people who write to a given MP, that they answer honestly, etc. etc.) So, we can regard this as being a bit like an opinion poll: we ask a sample of constituents whether they got an answer, and extrapolate to estimate the total response rate.
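In code, the estimator is just a division; a minimal sketch, with the numbers invented purely for illustration:

```python
yes_answers = 12    # respondents who said their MP replied (made-up figure)
total_answers = 20  # all questionnaire responses for this MP (made-up figure)

estimated_response_rate = yes_answers / total_answers
print(f"Estimated response rate: {estimated_response_rate:.0%}")  # 60%
```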

Unfortunately in many cases we don’t have very many responses to the questionnaire (either because an MP’s constituency isn’t very wired, so few constituents used our service; because we didn’t have accurate contact details for a significant part of the year, so couldn’t put those people in touch with their MP; or because the MP was recently elected, so hasn’t had much time to accumulate data). Over the course of the year we sent MPs about 30,000 messages, an average of about fifty per MP, but the number of messages sent to individual MPs varied quite a bit:


[Figure: histogram of the number of WriteToThem messages sent to each MP]

Obviously, the fewer messages we send to an MP, the fewer questionnaire responses about their performance we receive, and therefore the less accurate our estimate of how well they perform. To see why, consider a simple analogy. Imagine that a particular MP manages to respond to 50% of messages within 2–3 weeks. You can model that with the flip of a coin: for each message, flip the coin; if it comes up heads, count that as a response received, and if tails, not received. Suppose you do that once: 50% of the time the coin comes up heads (MP did respond), and 50% of the time tails (MP did not respond). If there’s only one message, therefore, you will always see a response rate of 0% or 100%. With two messages there are three possibilities: 25% of the time you’ll get two tails, 50% of the time one head and one tail, and 25% of the time two heads. So 50% of the time the estimated responsiveness is correct (1/2 = 50%), and 50% of the time it’s wrong; there’s a short code sketch below that checks this arithmetic. As the number of responses increases, accuracy improves; for instance, for an MP who has a response rate of 50% and 20 questionnaire responses, the probability that we’ll estimate various values for their responsiveness looks like this:

[Figure: probability of each estimated response rate for an MP with a 50% true response rate and 20 questionnaire responses]

Now the probability that our estimate will be exactly correct (10 “yes” answers out of 20 questionnaires) is only 18%, but the probability that we’ll get an answer quite close to the true value (between 8 and 12 “yes”s, or from 40% to 60%) is quite high: about 74%. It’s 95% certain that we’ll see between 6 and 14 “yes”s, corresponding to an estimated response rate between 30% and 70%. To put it another way, if all MPs had twenty questionnaire responses, and all had a responsiveness of 50%, then for about 2.5% of them (16 MPs) we’d estimate that their responsiveness was worse than 30%, and for another 16 that it was better than 70%. The 30%–70% range is called a “95% confidence interval”, and is an indication of how sure we are about the statistics we are publishing. If you don’t pay attention to the confidence intervals you will get a misleading impression from these statistics; we show them for a reason!
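Both the coin-flip arithmetic and the figures quoted above drop straight out of the binomial distribution; here’s a quick sketch using scipy, assuming as before a true response rate of 50%:

```python
from scipy.stats import binom

p_true = 0.5  # the MP's true response rate: a fair coin

# Two messages: 25% two tails, 50% one of each, 25% two heads.
for k in range(3):
    print(f"{k} 'yes' out of 2: {binom.pmf(k, 2, p_true):.0%}")

# Twenty questionnaire responses, as in the figure above.
n = 20
p_exact = binom.pmf(10, n, p_true)                            # exactly 10 'yes'
p_close = sum(binom.pmf(k, n, p_true) for k in range(8, 13))  # 8..12 'yes'
p_ci = sum(binom.pmf(k, n, p_true) for k in range(6, 15))     # 6..14 'yes'

print(f"exactly 10 'yes': {p_exact:.0%}")  # about 18%
print(f"8 to 12 'yes':    {p_close:.0%}")  # about 74%
print(f"6 to 14 'yes':    {p_ci:.0%}")     # about 96%: the 95% confidence interval
```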

(For comparison, a commercial opinion poll would typically use a sample of between 1,000 and 2,000 people, giving rather narrower confidence intervals, typically of ±2–3%. We can’t do that for individual MPs because we don’t have enough data, but aggregated across all representatives of a given type there is plenty of data, and we can produce much more accurate numbers. For instance, our estimate of the overall responsiveness of MPs, which is based on tens of thousands of questionnaire responses, is 63%, and the confidence interval on this is much smaller than ±1%.)
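The width of that interval follows from the usual normal approximation for a proportion; a sketch of the calculation, where the sample size is my assumption (the post only says it is tens of thousands):

```python
import math

p_hat = 0.63  # estimated overall responsiveness of MPs
n = 30_000    # assumed number of questionnaire responses ("tens of thousands")

# 95% confidence interval half-width under the normal approximation.
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"{p_hat:.0%} ± {half_width:.2%}")  # roughly 63% ± 0.55%
```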

I’ll finish with something on the question of which parties’ MPs are best at answering constituents’ correspondence. Here’s a plot of our estimates of the response rates of MPs from some major parties:

[Figure: cumulative distributions of estimated response rates for MPs of each major party]

(This ignores the Northern Ireland parties because (a) none of them has very many MPs, so we don’t have much data; and (b) I’d run out of colours and wouldn’t like to make a horrific faux pas. I’ve lumped together the Welsh and Scottish nationalists for much the same reasons. Apologies. The way to read the graph, by the way, is to consider each curve as telling you “what fraction of this party’s MPs have a responsiveness equal to or less than the value”.)

So what does this tell us? Tories are better at answering their mail, and everybody else does about equally well. (If you want the gory details, the distributions may be compared using a Kolmogorov-Smirnov test; only the distribution for Conservative MPs differs significantly from those of the other parties.)
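The post doesn’t include the test itself, but a two-sample Kolmogorov-Smirnov test is a one-liner with scipy; a sketch, with made-up per-MP response rates standing in for the real estimates:

```python
from scipy.stats import ks_2samp

# Hypothetical per-MP response rates for two parties; the real inputs
# would be the per-MP estimates plotted in the figure above.
party_a = [0.55, 0.62, 0.71, 0.78, 0.81, 0.84, 0.88, 0.90]
party_b = [0.40, 0.48, 0.52, 0.58, 0.61, 0.66, 0.70, 0.75]

statistic, p_value = ks_2samp(party_a, party_b)
print(f"KS statistic = {statistic:.2f}, p = {p_value:.3f}")
# A small p-value suggests the two response-rate distributions differ.
```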

2 Comments

  1. Can you do a breakdown of responsiveness for ‘safe’ and ‘marginal’ seats? Or maybe plot responsiveness against size of majority.

    Does the fact that Labour has many more MPs than the other parties make a difference? Doesn’t Labour have far more ‘safe’ seats?

  2. It’s a shame there are no details on the quality of reply, rather than just quantity. There are massive differences in the number of letters MPs receive, and this must make a great difference to the results. Most of the MPs near the top receive fewer than 50 letters, while those lower down receive many more. I notice David Lepper (who is ranked just above halfway) in Brighton Pavilion received over 600 letters; most MPs get fewer than 100, and the nearest to him was Gerald Kaufman on around 200. It seems unfair to compare someone in the middle of the range with someone at the top when there is such a large discrepancy in numbers. The fact David Lepper gets so much mail could be because he is giving better quality responses.

    I don’t think it is possible to just say the Tories are better responders; it is much more complicated than that. Also, it is interesting to note that one of the Tory MPs was caught writing to himself to improve his ratings.