World Wide Web Consortium

February 18, 2009

If you thought nothing much went on at W3C in relation to e-government, you'd be sadly wrong. Churning away in the background is the e-government Interest Group, which has produced, amongst other things, an up-to-date list of international reports relating to e-government, available at:

http://www.w3.org/2007/eGov/IG/wiki/Reports

Another very recent report is the 240-page one done by Deloitte for the EU – Study on User Satisfaction and Impact in EU27.

The report is basically the preparation for a more detailed study, testing the instruments (i.e. surveys) to be employed in the bigger exercise. Perhaps unsurprisingly, the main outcome is that home users lag behind business users, along with the fact that measuring ‘satisfaction’ is not straightforward, which is perhaps one of the reasons I’ve started looking towards collecting dissatisfaction instead.


Customer First!

July 29, 2008

The North East echoes my initial findings! A new report has been produced by Aperia (who did the work with Chorley/NWEGG on citizen need) on behalf of the Customer First Network of North East England local authority customer service managers, under the auspices of the North East Improvement & Efficiency Programme. It examines NI14 and contact management, with some sound advice!

Amongst the quotations I savour are:

regarding NI14 – “our view is that this figure is, in fact, of little value (but the Minister wanted one!)” – page 10

Page 11 – “Additionally, there are indications that it is important for public sector contact centre staff to increase their ratio of time spent with the customer as a percentage of time worked. There is emerging thinking that demonstrates that private sector call centre staff spend something approaching 40% of their working time with the customer whereas public sector equivalents spend less than 20% of their time with the customer. To drive these improvements it will be critical for contact centres to be recording a recognised measure of productivity – and what is more a measure that will have to be accepted by employees as an indicator that they can directly affect.”

Page 11 – “Attempts to address uniformity of customer satisfaction indices have been tried and failed on many occasions.”

Page 12 – “Potential measures for quality include:-

Overall Satisfaction – percentage of surveyed customer respondents expressing overall satisfaction with the services received to determine the percentage of customers who are satisfied overall with services provided by the organisation

Engagement with the improvement process – percentage of customers (broken down by customer type) identifying ways to improve service delivery to determine the level of customer engagement with service improvement.”

Page 13 – “Corporate systems should be in place to help measure customer satisfaction – the key quality criteria for any customer focused organisation. These should be multi-channel and configured in accordance with the available common languages (controlled lists) that describe local government services.”

Page 14 – “Measuring usage of public services across all primary channels for that service is critical.”

Page 15 – “There are no council’s (who responded) who have a fully holistic approach to managing access channels for local services. Customer Services as organisational units tend to be limited to telephone and face to face contact with little, if any, control over the web and white post channels or other lower volume channels. Corporate responsibility for face-to-face remains isolated to one-stop-shops, rather than more broadly applied to all face to face interaction.”

All in all, a useful document!


Satisfaction? Responding to Pete…

April 28, 2008

Thanks Pete – my personal response is below yours, but I’m not saying it’s the final one!

“Is “satisfied”, a satisfactory service, the limit of our ambition? Of course it may be a challenge even to get to that, but it seems to me you would ideally want to get beyond it, to something like excellent or “delighted”.

I have (once!) been asked to rate my satisfaction with a service from 1 (bad) to 10 (good), and then asked “if we didn’t score 10, what would it take to get us there?”. That maybe sends a more positive message than “if you’re just about satisfied, that’s good enough for us”. It might also throw up some new ideas for improvement.”

That’s what I started this blog for! According to some recent marketing literature, it’s all too personal. Offering people 1–10, 1–7 or 1–5 makes no difference according to the analysis, so which scale do you choose? My concept (and I don’t claim originality, I’m just seeking adoption of it) is to leave it as either satisfied or dissatisfied (this too has papers supporting it) and ask for the reasons behind the choice. Nice and simple. Another academic paper states that if you offer different people a choice between 1 and 10 for the same service, you’ll get different scores; it’s all very human, so again, get the reasoning, if possible.
There’s also another angle: concentrating on the ones dissatisfied with a service is better practice than getting satisfied service users up to excellent. I know that ideally we’d like all our users to consider us excellent, but whilst we have some that aren’t satisfied, let’s sort them out first; there’s another marketing paper on that, too!

“And about steering the public to cheaper channels – surely you need an element of that? Improvements don’t usually come for free, and budgets are finite, so if you don’t realise some savings, sooner or later you have to settle for something less than excellence.”

Agreed, but there is a temptation to push rather than pull. We’ve all seen what’s been happening to banks, and now they’re changing back. Many council services can actually require a mix of channels to be used: first a visit, then a few emails or maybe a ‘phone call, then another visit, then an email. The great British public will in most cases use the most convenient channel once they have confidence in our ability to deliver good services that way; what we need to do is develop their confidence in using those channels. But there will always be some people who can’t or won’t use the Internet or ‘phone, and we still need to provide them a quality service, probably mediated by humans but facilitated by the Internet.

What do YOU think?

When I saw John Seddon last week, he mentioned another book of his from 1992 on satisfaction and measures, “I Want You to Cheat!”, which I found second-hand on Amazon; I’m reading that, plus my usual diet of journals, before “Systems Thinking in the Public Sector”. It’s a smaller, lighter read, looking at the private sector.


Annual research report…

April 19, 2008

Having reached the formal first anniversary of my research (I’d actually been thinking about it a lot longer, then started it and was delayed by a spell of long-term illness with heart failure), I thought I’d explain the proposed model and why.

My review of the literature had only revealed some complicated metrics, as listed by Andrea di Maio and others, and targets such as National Indicator NI14. Along with that, having managed an e-government programme, I despised the Best Value Performance Indicator 157 and Priority Service Outcomes that had been the targets in England; they were of little value to the public!

I’d picked up the feeling that more recent reviews, such as the Irish lessons, pointed to measures, and also that customer satisfaction had a big role to play.

Companies like rol have proposed solutions such as GovMetric, and I think they’re getting there. My approach is to inhibit the use of targets; a pure reliance upon positive or negative feedback is the answer, and what I want from my suggested model.

If a customer/citizen (and there is a lot of debate about how we should view them, including the Cornford/Richter one) is satisfied or otherwise, they indicate this and leave feedback as to why. The feedback is then used to improve the systems…

Simply that – satisfied/dissatisfied – and why? We’ll do something about it! Of course, we still need to measure usage of channels, all the channels, but with usage and satisfaction data a great deal can be done to improve the service.
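The model really is simple enough to sketch in a few lines of code. As an illustration only (the class, field and channel names here are mine, purely hypothetical, and not taken from any real council system), something like:

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class FeedbackLog:
    """Minimal sketch of the satisfied/dissatisfied-plus-reason model.

    Each entry records the channel used, a binary verdict, and the
    free-text reason behind it. No 1-10 scores, no targets.
    """
    entries: list = field(default_factory=list)

    def record(self, channel: str, satisfied: bool, reason: str) -> None:
        self.entries.append(
            {"channel": channel, "satisfied": satisfied, "reason": reason}
        )

    def usage_by_channel(self) -> Counter:
        # Usage still needs measuring across all channels.
        return Counter(e["channel"] for e in self.entries)

    def dissatisfied_reasons(self) -> list:
        # Concentrate on the dissatisfied: their reasons drive improvement.
        return [e["reason"] for e in self.entries if not e["satisfied"]]


log = FeedbackLog()
log.record("web", True, "form was quick to complete")
log.record("phone", False, "kept on hold too long")
```

The point of the sketch is what it leaves out: no satisfaction indices to reconcile, just the verdict, the reason, and per-channel usage counts.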

The next development is to cater for the customers that aren’t banging upon the door of any channels. Need, as per NWEGG, is one way but improving and simplifying access may stop us pushing the door from the other side?


Great E-mancipator survey 1/2008 as a PDF

April 5, 2008

For those who don’t like electronic surveys, I’ve saved it as a PDF for mailing to me at the University…

 

great-e-mancipator-questionnaire-1-2008


Public Value, Social Capital and other fun metrics

March 15, 2008

In the course of my study of the literature I have had to consider some other ways of measuring the value of electronic government. One of the terms that has been used in recent years is Public Value and a document on the Cabinet Office web site by Kelly, Mulgan & Muers provides some background to this:

http://www.cabinetoffice.gov.uk/upload/assets/www.cabinetoffice.gov.uk/strategy/public_value2.pdf

Another term is Social Capital, but this is possibly even more difficult to measure, as a review of the literature by the Office for National Statistics demonstrates:

http://www.statistics.gov.uk/socialcapital/downloads/soccaplitreview.pdf

Hence my reason for wanting to examine ‘satisfaction’ versus ‘dissatisfaction’, with explanatory comments for success or failure in service delivery, as a means of improving channels!

