Why I dislike benchmarking

One of the many facets of my Useful Productive Employee function is being the benchmarking contact for the organisation. This means picking up emails from our fellow organisations all around the country, and diverting them to the relevant expert in-house so that far-flung experts can tell each other about developments in the field. The requests can be quite interesting, if you’re a bit of an [insert industry name]-geek, but it remains a task that leaves me feeling a bit “meh”.

I know I am biased, and am therefore only noticing things through my own rather crude filter of value vs. non-value activity, but benchmarking is one of those things that seems to me undeserving of its apparent reputation as a deliverer of mega-efficient innovation and creativity. Many, if not most, benchmarking requests are nuanced with tones of “what’s your solution about x, as we have to find a solution too and maybe we can copy you”, underpinned by humorous jolly comments about “why reinvent the wheel, ho ho!”. It’s all done in the best possible taste and with good intentions, but, sadly, good intentions do not make meaningful and stickable improvement.

Now I do appreciate that asking around for what others are doing is at least a fraction better than jumping (brown) nose first on the boss’s Great New Idea and implementing it without question, but there is still something quite fundamental missing from the whole ethos of benchmarking: it doesn’t ask Why, of anyone! Why do you believe that x is a problem? Why did you decide to go for solution c instead of solution b, and where’s the proof that it worked? Why were any changes needed at all? What’s the problem we are actually trying to solve here? HERE, for crying out loud! The answers are on your doorstep!

So in the interests of satisfying my craving to do some demand analysis work (I currently have two such tasks pending but not ready to actually do at the moment, so I am resorting to analysing demand in the unlikeliest of areas), I went through the benchmarking email inbox for the first three months of the year, and found four kinds of request:

  1. What’s your organisational structure/roles/ownership for xyz function?
  2. What is your policy or procedure for doing abc?
  3. What solution will you use for dealing with “z” new situation?
  4. Have you done an evidence based review of practice about “y”?

Unsurprisingly (as I said, I’m biased), the last of these categories had the fewest tallies – just 4 out of the 48 requests I counted. Categories 1 and 2 had about a dozen each, with 3 scoring highest (20). And 2 and 3 started to blur after a while anyway, so I think we have a definite winner: solutions and policies, gimme your solutions, gimme gimme gimme!

The other reason I dislike these seemingly innocuous queries – as well as the “HERE, for crying out loud” reasons above – is that I know what’ll happen to the responses, because I have seen it happen time and again, and it’s basically just human nature. Gah, there’s a sweeping statement that could do with a smidgen of justification if ever I wrote one… let’s see if I can locate any nuggets of truth from the rubble of cognitive bias. Just for fun, I will stereotype my way through a benchmarking scenario of the worst kind.

Group A: We have a problem, let’s get Person B to look into it.

Person B: Gosh, we do have a problem, that process/paperwork/task has never really worked properly, and now it’s got less people/money/time it’s getting worse. I dunno, maybe change it a bit?

Group A: Great! But how?

Person B: Fear not, my faithful followers, I will carry out an extensive cross-country benchmarking exercise, scientifically analyse the results, and we can then choose the best available option, AND learn from others’ mistakes at the same time. How’s that for Innovation and Creativity?

Group A: Hurrah!

Some time later…

Person B: I have conducted my extensive cross-country benchmarking exercise, scientifically analysed the results, and I have an answer! I recommend that we follow the Borsetshire model, adding a few tweaks from the Camberwick Audit, and discarding the problem features as experienced by Riverseafingal.

Group A: Will it work here?

Person B: Of course! I have kept the Camberwick Auditors abreast of my findings and they are 100% behind us on this one so long as I do some consultation, which means bringing you this report, so if you read it and agree with it, that means it is practically guaranteed to work here as well!

And the moral of this story is…

  1. people follow people follow plausible ideas.
  2. people seem to prefer to follow people from other places they don’t know, rather than find out stuff for real for themselves.

8 thoughts on “Why I dislike benchmarking”

  1. Have a read of the executive summary from this: https://www.innovateuk.org/documents/1524978/2138994/Solutions+for+Cities+-+An+analysis+of+the+Feasibility+Studies+from+the+Future+Cities+Demonstrator+Programme/5d8ad270-4623-4057-a0e8-2e303033122f

    It’s a review of 30 or so expressions of interest from local authorities to a Technology Strategy Board competition called “Future Cities” – essentially asking cities to submit ideas to solve future urban problems, with the chance to get £50,000 to develop those ideas further and a shot at a mega prize of £28m to implement them fully.

    Sounds great. But lo and behold, “the majority of cities developed similar solutions”. How can this be? Were the people who put the bids together largely following fad, fashion and prejudice rather than evidence? Were they copying each other as a result of the “benchmarking” kind of work that goes on?

  2. God I hate that “why re-invent the wheel” bollocks.
    I’d never PROPERLY noticed before the mock humour it’s said in.
    It’s HATEFULLY ignorant.
    “So why employ you then?” should be my answer.

  3. I disagree*.

    I run a benchmark, so I may just be in denial or our particular benchmark might be a totally different kettle of fish. But I don’t think so – we put it together with CIPFA who have been running benchmark clubs for a few decades.

    The bit where I think my world and yours diverge is this: “I will carry out an extensive cross-country benchmarking exercise, scientifically analyse the results, and we can then choose the best available option”.

    This is not how benchmarking works. It doesn’t recognise “options”. We don’t do much more than try to bring together as a snapshot the inputs (work), outputs (results) and costs (in time and money) of lots of councils. It is an aid to understanding what’s happening, but is quite a rubbish way of explaining why it’s happening. It’s a management account with bells on. It can show you areas where you look cheap / expensive / slightly out of kilter with your peers.
    It takes a whole other chunk of thinking to understand what it means, and what (if anything) you should do about it.

    Anyone that says benchmarking is “a deliverer of mega-efficient innovation and creativity” is either bonkers or tilting at windmills. Rather, benchmarking is an excellent beginning to creating an environment for thinking about improvement. And yes, differences in results can be useful can-openers for conversations between councils.

    * I’m not sure I like benchmarking, so perhaps I don’t really disagree. But I dislike it for possibly entirely different reasons to you.

    • I wouldn’t consign the entire concept of benchmarking to the dustbin of uselessness – it’s just a tool, and like most tools, can be used skillfully, or lazily (like I described in my stereotyped example – with “scientific analysis” being nothing of the sort). I see more of the latter use, therefore for the present time I stand by not liking it.

  4. I would. I would stick it in the dustbin of uselessness and stamp the lid down firmly.
    Comparing against anything other than purpose drives the wrong thinking. Most council services don’t have measures of customer purpose, only measures of activity, or nationally set ding-dongs, so they’d be comparing numbers that tell them nothing of how their service actually IS.
    So what if it is good or bad compared with other services who also have measures that don’t measure what matters? You’re still not understanding what is happening in your service or why.
    The only thing that matters is understanding your own system. Other people’s systems are other people’s. They may not have the right measures, or they may, for THEIR own system.
