In terms of electronic government, email is normally the poor, ignored, unmeasured relation. However, when a citizen sends an email, for example because there is no form on the website and only a generic (or even very generic) email address, they still expect a response; and because they know from experience that email delivery is near instantaneous, they expect a pretty quick one. This has always been the case, but the reality is largely ignored. Some local authorities ping back an acknowledgement that the email has been received and that a reply will be forthcoming within a number of days; some even do this for personally addressed mails.
The other area for concern is how few government services count the number of emails received or analyse them in any way. Do we know if particular services get more than expected? Perhaps somebody has unhelpful information on a website, forcing users to ask questions every time. Perhaps a form is missing that would route the request straight to the right service. How do we know any of this whilst hundreds or thousands of emails are manually forwarded, uncounted and unanalysed? And how do we set the response expected? If we send a standard seven-day auto-answer to a message the sender has flagged as 'High Importance', how will they feel?
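To show that the counting and analysis argued for above need not be onerous, here is a minimal sketch in Python using only the standard library. The mailbox feed, the address names and the function name are all hypothetical; the only factual assumption is that mail clients such as Outlook record the 'High Importance' flag as an `Importance: High` header.

```python
import email
from collections import Counter

def tally_inbound(raw_messages):
    """Tally messages per destination address and count 'High Importance' mail.

    raw_messages: an iterable of raw RFC 822 message strings, e.g. a
    hypothetical export from a shared departmental mailbox.
    """
    per_address = Counter()
    high_importance = 0
    for raw in raw_messages:
        msg = email.message_from_string(raw)
        # Group by the address the citizen wrote to, so busy services stand out.
        to_addr = (msg.get("To") or "unknown").lower()
        per_address[to_addr] += 1
        # Outlook and similar clients expose the flag as an Importance header.
        if (msg.get("Importance") or "").lower() == "high":
            high_importance += 1
    return per_address, high_importance
```

Even a tally this crude would reveal which generic inboxes are overloaded, and how many senders marked their mail urgent, before any auto-answer goes out.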
An interesting piece of work, complete with infographics, has been published on Codeworks Connect entitled 'Interesting Insights into Email Infographic', which helps set the scene on user behaviour around email. Whilst I am not a believer in the adage that we can't manage what we can't measure, I do believe that in order to deliver channel shift and improve service delivery we need to know what is happening on every channel, and leaving poor relations like email or telephony out in the cold isn't good enough.