Who is doing what at the moment in local government? Joined-up research…

October 26, 2008

I had been in touch with Brendan McCarron of the CIPFA Performance Improvement Network in the past, and it was he who had pointed me to the Scottish Accounts Commission paper on gap analysis that I’ve mentioned before.

He has been working with Simon Speller, councillor and academic (who was referenced in the aforementioned report), on customer satisfaction in a series of works for the CIPFA Performance Improvement Network – Improving Customer Satisfaction.

On 9th October I gave a short presentation in Preston, offering an academic view of customer satisfaction to the ESD-Toolkit group looking at Customer Insight, which is related to the one on profiling. This also provided some feedback to my research – ESD-Toolkit – Customer profiling & satisfaction.

On 11th November I am presenting another academic view at the EiP conference in London; the EiP Group is looking at Customer Insight, Citizen Engagement and Change in Local Government.

How many more networks are there? I’ve also been involved with Socitm’s discussions around metrics, and these overlap with the ESD-Toolkit since both employ GovMetric, a couple of whose staff I had conversations with at the outset of my research.

Are we all talking to each other, folks, or are you relying upon me talking to you?

IDeA NI14 Guidance and GovMetric

August 2, 2008

Public Sector Forums have made a great deal of the fact that the IDeA guidance on NI14 promoted GovMetric, and only GovMetric, as a possible solution.

I’ll declare some interests here: I have met with rol, the company that produces GovMetric, and over a year ago had an academic discussion with them about the whole concept of customer satisfaction and channel migration. The council I work for currently employs the Socitm solution for web site evaluation, which partially employs a tool produced by rol, who are working with Socitm on service benchmarking. I am also a Socitm member, a member of my regional Socitm executive, and on the Local Government Chief Information Officer Council, which Socitm was recruited by central government to create.

I like the concept of GovMetric and haven’t seen anything to match it other than built-in CRM tools, and of course they don’t all come with the templates for web sites or a complete, designed-for-purpose suite of tools. There is Opinion-8, which I believe doesn’t work quite the same way either.

I’ll agree that it was daft for the IDeA to nominate one tool, although I don’t think they could have avoided promoting the ESD-Toolkit, since it’s their child! However, I have yet to find anything conceptually up to GovMetric. We asked our web developers to build a tool into the web site CSS to collect feedback and they wanted a lot of money; that would probably have contributed towards buying GovMetric, which isn’t cheap, and tying up the other channels too!

What’s the solution? Horses for courses, I suspect. By the time people get around to trying to collect NI14 data manually, they’ll realise what a time-waster it is and plump for an electronic tool. What is needed in collecting the data is rigour and an awareness that NI14 is not the answer; the answer is feedback from staff and citizens about the systems we use, whether they deliver answers by the web, telephone or face-to-face. We need to collect that feedback and act upon it, while at the same time supplying the required indicator.

Why do we need to do that? To instil confidence in the public that we mean to change, to transform. We do mean to do this, of course, but we need to demonstrate it! We also need to placate the Minister!


June 28, 2008

My concerns about benchmarking, targets and related matters, whilst not universal, appear to have some adherents! During the last week I have discussed them amongst colleagues at Socitm (Yorkshire & Humber) and with Paul Canning and the Public Sector Web Managers Group.

I also discovered a paper from the U.S. General Services Administration – Improving Citizen Customer Service V 1.0 – which supports my theory and uses the term ‘yardstick’, which I think is a much better term when dealing with purely internal metrics as opposed to (possible) target setting. If you don’t want to read it all, just focus on chapters 5 and 6.

Four of the eight guidelines in the conclusions are:

“A quantitative “value” for citizen satisfaction can be used as a yardstick for trends. This value can be defined in various ways. Agencies can track the percentage of citizens who expressed complete satisfaction with their contact or use a scoring system defined internally or by a third party.

Qualitative satisfaction questions and information will help agencies analyze citizens’ expectations and areas in which they are not meeting those expectations.

Quantitative (and to some extent qualitative) satisfaction data should be used to examine the correlation between the performance metrics and benchmarks used in this document and citizen satisfaction. For example, if improving average handle times at an agency is not resulting in an increase in satisfaction scores, the agency’s time and effort is better spent elsewhere in the service environment.

Surveys can be conducted at the end of a contact or within a reasonable timeframe after the interaction.”

and also states:

“Performance metrics described in this document are only effective if they are captured, reported and analyzed in a timely manner and reach the right decision maker. Also, metrics should be used not in isolation but in the context of a strategy and methodology.”

Of course I’m not arguing to import this wholesale from the USA; if one reads the document, it is still rather onerous for a small organisation. But data integration and analysis, or Extraction, Transformation and Loading (ETL), can be done – if only GovMetric weren’t so expensive! It’d blow NI14 into last year…
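The GSA’s ‘yardstick’ idea – tracking the percentage of satisfied contacts per period as a trend, rather than chasing a target – amounts to very simple arithmetic. Here is a minimal sketch of that calculation; the record layout and field names are my own illustrative assumptions, not GovMetric’s or the GSA’s.

```python
# Hypothetical contact records, each recording the reporting period and the
# outcome on a simple satisfied/neither/dissatisfied scale.
contacts = [
    {"period": "2008-Q1", "outcome": "satisfied"},
    {"period": "2008-Q1", "outcome": "dissatisfied"},
    {"period": "2008-Q1", "outcome": "satisfied"},
    {"period": "2008-Q2", "outcome": "satisfied"},
    {"period": "2008-Q2", "outcome": "neither"},
    {"period": "2008-Q2", "outcome": "satisfied"},
]

def satisfaction_yardstick(records):
    """Percentage of contacts expressing satisfaction, per period."""
    totals, satisfied = {}, {}
    for r in records:
        totals[r["period"]] = totals.get(r["period"], 0) + 1
        if r["outcome"] == "satisfied":
            satisfied[r["period"]] = satisfied.get(r["period"], 0) + 1
    return {p: round(100 * satisfied.get(p, 0) / totals[p], 1)
            for p in sorted(totals)}

print(satisfaction_yardstick(contacts))
# → {'2008-Q1': 66.7, '2008-Q2': 66.7}
```

The point of the yardstick framing is that the period-on-period movement of this figure is watched internally, rather than any single value being set as a target.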

Targets, metrics and dissatisfaction

May 17, 2008

I have recently held an email conversation with an experienced government auditor, who stated that –

Business process improvement methodologies (Lean, Six Sigma) are heavily reliant on metrics / measures. 

Unfortunately, the data councils collect for their performance indicators is usually neither adequate nor robust enough for meaningful improvement initiatives. This is because:

- it is reported to the government and to regulators, and this leads to ‘gaming’ behaviours

- it is centrally imposed and does not aid local management

- the data is manipulated and averaged, so the ‘voice of the process’ cannot be seen

There is little understanding of the theory of variation and its importance in process improvement. Most councils’ performance reports compare 2 or 3 points on a chart and conclude that things are getting better or worse – this is flawed, as you are probably aware.

My response was that during my literature review I came to a conclusion that fits nicely with lean, but not well with PIs, KPIs etc.: that dissatisfaction is the true ‘measure’. Responses on a Likert scale are subjective by their very personal nature, as is satisfaction as a whole, but if one can get people to tick a box saying they are dissatisfied and explain why, you are gaining the ‘voice’ response rather than the ‘exit’ one that Hirschman employed in his work on East German politics. I’ve no proof that this is true, but the work that rol have done with GovMetric, using the slightly broader brush of satisfied/neither/dissatisfied, should provide some guidance (and I am in touch with them, although my council is currently not a user of anything).

My proposal, which is really of no benefit as a government metric unless one counts the numbers and compares them with the whole, does tend to get around some of the issues mentioned. It can also assist channel migration, in that the same reporting factor needs to be used across all channels, and identify particular failures in channels, i.e. ‘phone system failures, an over-complex web site, a stroppy reception etc.
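The mechanics of the proposal – tally dissatisfied responses per channel and keep the stated reasons alongside them – can be sketched briefly. This is only an illustration under my own assumptions; the channel names, reason strings and record layout are hypothetical, not drawn from any real system.

```python
from collections import Counter, defaultdict

# Hypothetical feedback records for the dissatisfied-and-why model.
feedback = [
    {"channel": "phone", "dissatisfied": True,  "reason": "kept on hold"},
    {"channel": "phone", "dissatisfied": True,  "reason": "kept on hold"},
    {"channel": "web",   "dissatisfied": True,  "reason": "could not find form"},
    {"channel": "web",   "dissatisfied": False, "reason": None},
    {"channel": "face-to-face", "dissatisfied": False, "reason": None},
]

def dissatisfaction_by_channel(records):
    """Count dissatisfied responses per channel and tally the stated reasons."""
    counts = Counter()
    reasons = defaultdict(Counter)
    for r in records:
        if r["dissatisfied"]:
            counts[r["channel"]] += 1
            reasons[r["channel"]][r["reason"]] += 1
    return counts, reasons

counts, reasons = dissatisfaction_by_channel(feedback)
print(counts.most_common())             # channels ranked by dissatisfaction
print(reasons["phone"].most_common(1))  # the top complaint on the 'phone channel
```

Because the same reporting factor is applied to every channel, a channel whose count stands out flags a specific failure – and the attached reasons say what to fix.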


Re: Pete but not a repeat!

April 29, 2008

What a coincidence: just as Pete suggests a pilot, the following report gets published, which actually suggests doing something – but wouldn’t it be nice if they piloted it for the rest of us, or ran a couple of differing pilots and took up the most successful?

Parliament has today published the Committee of Public Accounts report on Government on the Internet: Progress in delivering information and services online (16th report, HC 143).

A fascinating document, especially if one reads the committee proceedings at the back!
Two of the key conclusions, from my view –

16% of government organisations have no data about how their websites are being used, inhibiting website developments.

The Government does not know how much it is saving through internet services, nor whether any savings are being redeployed to improve services for people who do not or cannot use the internet.

The Committee has requested improvements for the future but, as the dialogue in the committee implies (though the recommendations do not), these need to bear all channels in mind.

But I’m not too concerned about satisfied customers, although they’re very nice; my thinking (as opposed to the perverse NI14 imagination) is to pick up the dissatisfied, the unhappy, the failed customers and find out what pi**ed them off. Great, count the positives as well to get some sort of scale, but concentrate on correcting the issues – a rotten ‘phone system, poor navigation on the web site or unhelpful opening times at reception could affect a channel and, if it’s a lower-cost one, cost the council money.

Some of this type of work is being done by GovMetric (rol), as one example and one style, but the MPs have now picked up on the lack of metrics. Let’s not spend a lot of money doing something silly; let’s pilot some well-researched, straightforward methods!

Annual research report…

April 19, 2008

Having reached the formal first anniversary of my research (I’d actually been thinking about it a lot longer, then started it and got delayed by a spell of long-term illness with heart failure), I thought I’d explain the proposed model and why.

My review of the literature had revealed only some complicated metrics, as listed by Andrea di Maio and others, and targets such as National Indicator NI14. Along with that, having managed an e-government programme, I despised Best Value Performance Indicator 157 and the Priority Service Outcomes that had been the targets in England; they were of little value to the public!

I’d picked up the feeling that more recent reviews, such as the Irish lesson, pointed to measures, and also that customer satisfaction had a big role to play.

Companies like rol have proposed solutions such as GovMetric, and I think they’re getting there. My approach is to inhibit the use of targets; a pure reliance upon positive or negative feedback is the answer, and that is what I want from my suggested model.

If a customer/citizen (and there is a lot of debate about how we should view them, including the Cornford/Richter one) is satisfied or otherwise, they indicate this and leave feedback as to why. The feedback is used to improve the systems…

Simply that – satisfied/dissatisfied – if so, why? We’ll do something about it! Of course, we still need to measure usage of channels, all the channels, but with usage and satisfaction a great deal can be done to improve the service.

The next development is to cater for the customers who aren’t banging upon the door of any channel. Need, as per NWEGG, is one way in, but improving and simplifying access may stop us pushing the door from the other side?