A couple of weeks ago, I wrote about some barriers across a footpath as a simple illustration of how easy it is to skew public decision making if the question is defined too narrowly.  Since then I have come across a number of things which add up to much clearer thinking on this than I managed then.  

The first was a piece by Mike Masnick on the not obviously similar question of whether governments should have the ability to tap internet-based phone calls in the same way that they have long been able to do for POTS (Plain Old Telephone Service – excuse for gratuitous use of one of my favourite abbreviations).  In the POTS world, it’s easy because there are telephone exchanges.  In the Skype world it’s impossible:  

Calls are encrypted end-to-end, meaning that only the end users who are parties to a call hold the secret keys to secure the conversation against online snoops. There’s no device Skype can install at their headquarters that would let them provide police with access to the unencrypted communications; to comply with such a mandate, they’d have to wholly redesign the network along a more centralized model, rendering it less flexible, adaptable, and reliable as well as less secure.  

So the question becomes how far it is appropriate to require Skype to change its business model and technical architecture for its many millions of customers in order to allow the interception of the conversations of what is presumably a very small proportion of those customers. It is clearly possible to argue the point either way, but the point here is not to resolve, or even engage with, that argument, but to draw out the nature of the trade-off being made.

Now here’s another example, this time from Cormac Herley of Microsoft Research in a paper with the splendid title, So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users [pdf]:

We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot.  

The argument is essentially that, viewed at the system level, the aggregate costs to users of complying with security requirements may well outweigh the aggregate benefit to them of doing so, and that non-compliance is therefore a rational response. As Herley stresses later in the paper:

While we argue that it is rational for users to ignore security advice this does not mean that the advice is bad. In fact much, or even most of it is beneficial. It’s better for users to have strong passwords than weak ones, to change them often, and to have a different one for each account. That there is benefit is not in question. However, there is also cost, in the form of user effort. In equilibrium, the benefit, to the user population, is balanced against the cost, to the user population. If observed user behavior forms the scales, then the decision has been unambiguous: users have decided that the cost is far too great for the benefit offered. If we want a different outcome we have to offer a better tradeoff. 

It follows that understanding the cost of compliance is essential to understanding the net value of the policy. Herley does a rough and ready calculation of the cost in the USA by multiplying the number of users by an hourly value of their time:

This places things in an entirely new light. We suggest that the main reason security advice is ignored is that it makes an enormous miscalculation: it treats as free a resource that is actually worth $2.6 billion an hour […] 

When we ignore the costs of security advice we treat the user’s attention and effort as an unlimited resource. Advice, policies and mandates thus proliferate. Each individual piece of advice may carry benefit, but the burden is cumulative. Just as villagers will overgraze a commonly held pasture, advice-givers and policy-mandaters demand far more effort than any user can give. 
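The arithmetic behind that figure is simple enough to sketch. The numbers below are illustrative assumptions (roughly 180 million US adults online, with an hour of their time valued at twice the 2009 US federal minimum wage of $7.25); only the $2.6 billion result is taken from the quotation above.

```python
# Back-of-envelope sketch of the "cost of an hour of user effort" figure.
# Inputs are illustrative assumptions, not figures quoted in the post.

online_users = 180_000_000          # assumed number of US online users
value_of_time_per_hour = 2 * 7.25   # assumed $/hour: twice the minimum wage

cost_per_hour_of_advice = online_users * value_of_time_per_hour
print(f"${cost_per_hour_of_advice / 1e9:.1f} billion per hour of user effort")
# -> $2.6 billion per hour of user effort
```

On anything like those assumptions, a single hour of effort demanded from every user dwarfs the plausible losses most security advice is meant to prevent, which is the point of the passage above.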

Then finally, along comes Paul Clarke being authenticated over the phone by an insurance company. It does not go well:

Match the process to the risk. That’s all I ask, as a process rationalist. It works. The one really gold-standard online transaction that government offers – the tax disc – works so beautifully because just such a risk-based decision was made. You don’t have to exhaustively prove that you are the person connected to the licence reminder or the car. You just have to have the reference number in your hand, and a means of payment. 

Paul has put his finger on something important there.  I too have found over the years – rather to my disappointment – that nobody has tried to pay my bills fraudulently. I remember arguing when plans were first being made to put VAT online that the point at which strong verification was needed was the point at which a trader was applying for a refund. At that stage, with very low levels of adoption, putting equivalent obstacles in the way of somebody trying to make a payment (this was at a time when companies were being encouraged to buy digital certificates at £50 a time) didn’t make a great deal of sense.

But it is a comment on Paul’s post from Adrian Short which sums all this up in the neatest form I have seen:

Firstly, you need to get the balance right between having false positives (letting the wrong people in) and false negatives (keeping the right people out). Where that line is drawn very much depends on the underlying value of the data/transaction. 

Secondly, you must acknowledge that your security measures have a cost both for the organisation and its customers. This cost must be offset against the value of the transaction, including the cost as described above that legitimate customers may not be able to complete the transaction at all.

There is a “security” mentality that says that every process should have as much security as possible, whereas it should actually have as little security as necessary. Good security is proportionate and as far as possible, unobtrusive. 
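That comment can be read as a simple expected-value comparison. The sketch below is a minimal illustration of that reading, not anything from Adrian Short's comment or the post itself, and every number in it is hypothetical; the only point is that the friction imposed on legitimate customers sits on the same scales as the fraud prevented.

```python
# A minimal sketch of the "as little security as necessary" trade-off.
# All figures are hypothetical; the structure of the comparison is the point.

def net_value_of_control(legit_users, minutes_per_user, value_of_minute,
                         abandonment_rate, transaction_value,
                         fraud_attempts_blocked, avg_fraud_loss):
    """Expected benefit minus expected cost of adding one security step."""
    friction_cost = legit_users * minutes_per_user * value_of_minute
    lost_business = legit_users * abandonment_rate * transaction_value
    fraud_prevented = fraud_attempts_blocked * avg_fraud_loss
    return fraud_prevented - (friction_cost + lost_business)

# Hypothetical example: an extra identity check on a low-value payment service.
print(net_value_of_control(
    legit_users=1_000_000, minutes_per_user=5, value_of_minute=0.25,
    abandonment_rate=0.02, transaction_value=40,
    fraud_attempts_blocked=500, avg_fraud_loss=400))
# Negative result -> the control costs more than the fraud it prevents.
```

Change the underlying value of the transaction and the same check can easily become worthwhile, which is exactly the proportionality Adrian Short is describing.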

All those examples – except the pavement barrier I started with – are about security in one way or another. That’s not an accident, but it’s not the complete story either. The real point is that misjudging the balance of costs and benefits in the widest sense leads to skewed decision making, and it applies to every aspect of service design. The reason why security issues so often come up as examples is, I suspect, not because that basic principle operates any differently, but because in a wide range of organisations and services in both private and public sectors, security is applied as an overlay from a perspective which, as Cormac Herley observed, tends to see the benefits of greater security more clearly than the costs. I am definitely not arguing that we should ignore or neglect security: money and personal data are valuable commodities which attract serious criminal interest, and it would be complete folly not to have appropriate defences in place.

But the basic point remains the same: the costs of design decisions need to be understood as clearly as the benefits. And if the costs fall externally while the benefits are felt internally, there is no incentive to reduce the costs and a continuing risk that the balance will be struck inappropriately. Managing that risk is an important job for any service designer.