An academic report from Norway, reflecting on the eGovMon benchmarking practice reported by the United Nations, has appeared. The paper by Nietzio, Olsen, Eibegger and Snaprud, entitled "Accessibility of eGovernment web sites: Towards a collaborative retrofitting approach", is not freely available, and I wasn't even able to reach it with my Athens login; however, an excellent summary can be found at e-governments.wordpress.com.
Looking back on my own experience, the accessibility of a website is frequently determined by the content management system (CMS), a point this approach acknowledges. Regular checking, including benchmarking, can show where the issues lie, which is again part of the approach. It is also possible to use automated accessibility checkers and to highlight issues on an online forum, both of which they document. However, part of the difficulty remains in getting the CMS developer to maintain accessibility within their own application as it evolves: gremlins frequently creep in between major re-writes, creating a whole range of issues that the developers, being detached from the end user, never seem to see any reason to fix.
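To illustrate the kind of thing those automated checkers do (the paper does not name specific tools, so this is only an indicative sketch using Python's standard library): one of the simplest machine-checkable rules is that every image should carry an alt attribute, so a screen reader has something to announce.

```python
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Collect <img> tags that lack an alt attribute, one
    of the basic checks automated accessibility tools run."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        if tag == "img" and "alt" not in dict(attrs):
            # getpos() gives (line, column) of the offending tag
            self.violations.append(self.getpos())


def check_missing_alt(html):
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.violations


page = """<html><body>
<img src="logo.png" alt="Council logo">
<img src="banner.png">
</body></html>"""

print(check_missing_alt(page))  # one violation: the banner image, on line 3
```

A check like this demonstrates exactly the limitation discussed below: the machine can verify that an alt attribute exists, but only a user can say whether its text is actually meaningful.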
Further to their proposals: whilst automated accessibility checkers demonstrate the code's adherence to guidelines, this is a machine view, and ultimately it is the end user who determines the real level of accessibility. My own parsimonious model, collecting (dis)satisfaction data from users of government services and employing it to improve those services, is really the only way to test accessibility, without dismissing the value of continuous checks on website data. Some of the issues may lie not with the website itself but with the processes that underpin it, making the electronic delivery cumbersome or challenging for a user with a disability.
A nice idea, but I would be very surprised if any UK authorities are not already employing these little aids. What they need to do is apply the feedback mechanism across all channels and make proper use of the data.