Twenty years ago, a business case was required for the purchase of a single PC.  Ten years ago, internet access was through a modem attached to a computer in a small locked room at the end of the corridor, with a book kept beside it to record sites visited.  The past gets strange and distant very quickly.

At some point in that process, there was a reversal of expectations and experience.  Technology and its applications in the work environment got overtaken by technology and its applications at home.  It happened at different times for different people and different working environments, but it must now be true that a high proportion of office employees can do more with their own technology than with their employers’.

Increasingly, though, the disparity is socially driven, not technically driven.  The constraints are less about what the technology can inherently do and more about where the user is allowed to go.  As Techdirt recently reported:

Consulting firm Challenger, Gray & Christmas has come out with a study this week claiming that nearly one in four companies blocks employee access to social networks like Facebook and MySpace … [T]here are companies who probably ban Facebook at work — just like in the early days of the telephone there were those that banned telephones at work, and, more recently there were companies that banned email or the internet at work. Eventually, companies recognize that fearing communication tools tends to backfire. Embracing them tends to be a lot more productive.

So how do new tools and applications permeate the boundaries which organisations erect around themselves?  Ars Technica has explanations based on culture, economics and the decentralisation of technology.  They conclude on the first of those:

The end result is that consumers bring to the office the expectations that they’ve developed through their interaction with consumer hardware, and in most cases those expectations are frustrated by the reality of corporate IT. This phenomenon is also at work on the network, where users develop their sense of how networked apps (messaging, collaboration, and archival) should look and function through daily contact with the lively ecosystem of consumer-driven Web 2.0 applications. Next to something like Facebook or Google Maps, most corporate intranets have an almost Soviet-like air of decrepit futility, like they’re someone’s lame attempt to imitate for a captive audience what’s available on the open market.

But the Ars analysis makes it all seem fairly logical, though my experience suggests that it is often far from that.  Ostensible reasons for constraints collide with actual reasons to create a pile-up which is hard to disentangle, and so hard to move on from.  Part of that is about changed expectations about what is normal – email can be uncomfortable and slightly threatening, a core working tool, or frustratingly old fashioned, depending on the perception of the user, rather than on anything about the technology.  Those perceptions of users then run up against perceptions of the organisation:  which uses of the system are safe, which are insecure, which are frivolous, and which will at some point fall into (or are already in) the category of things which new entrants to the organisation will simply take for granted.  Getting that right entails separating out two elements of trust which are often run together – trust to act sensibly and trust to act responsibly.

The first is about how people manage their time and work:  if we unblock Facebook, does organisational productivity plummet?  There is a simple answer, favoured by some, that anybody determined to be unproductive will find ways of being so regardless of whether they have access to social media.  I don’t think that’s compelling either way in itself.  The more pertinent point from an organisational point of view is whether communication literacy is complete without these tools – and even if it is now, for how much longer.  That is – or should be – about how communication happens within the organisation and with its customers.  The real need is to find a way of trusting people to behave appropriately in an environment with which many of the people who would have to do the trusting are deeply unfamiliar.

The second is about personal hygiene:  if we unblock Facebook, will users do stupid things which put data and systems at risk?  That seems to me both easier and harder to deal with.  The easy bit should be to be clear about the difference between real risks and the appearance of risk, and not to default to locking down anything which looks even vaguely like the second.  The hard bits come from the twin facts that there are bad guys out there and that the manipulation of social norms is one of their key techniques.  We know that there is a market for the criminal misuse of our customers’ data, and we have not the slightest desire to add to its supply.  Brendan Peat, one of Don Tapscott’s team at Wikinomics, puts the strong case for relying on trust to move things forward:

My feeling is that moving beyond the current model for information security is going to take a little bit of technology and a lot of trust. Web 2.0 tools and the Net Generation will both be additional factors that push the issue to the forefront at leading organizations. Companies will need to move to a model of ‘decentralized security’, which I see as basically allowing users to manage their own security permissions. Organizations will first start experimenting with information inside the firewall, but eventually they will need to evolve and extend beyond the walls of the enterprise.

The problem is that organizations today need to be agile, reconfigurable, and able to leverage partners and third-party expertise. Unfortunately, to operate in this new environment, security and permissions need to be dynamic and flexible, both internally and externally. To become a next generation enterprise it will be increasingly important to both empower and trust employees when it comes to information and security decisions.

That’s all very well in principle.  And I can see how it might work in organisations which (a) are relatively small and highly cohesive (so that there’s plenty of offline trust to start with) and (b) hold information which is not, in general, specifically attractive to criminal outsiders.  We seem a long way, though, from a state where we could be confident that individual users would make the right trade-offs (it sometimes feels as though we are still a long way from the people whose job this is making the right trade-offs).  Saying that we need to empower and trust individuals is fine in principle, but it serves only to take us on to the next question: empowered and trusted to do what, with whom, in what circumstances?  There can be principles for that, but the attempts I have seen fall into a common trap with guidance:  they look helpful when you don’t have a problem and don’t need guidance, but fail miserably just at the point where you are sufficiently uncertain to turn to the guidance.  So we come back again to trust:  what are the steps we need to take which will get us to enough people being comfortable that greater openness, based on a wider range of tools, is the right way forward?

Responses

  1. The thing about trust is that it doesn’t emerge, full grown, from the beginning. It grows and it feeds off failure. People learn how to trust not because there are guidelines but because they interact with other people and do it better each time.
    If we’re saying that, in the end, the public sector simply can’t take part in that process, then I think we’re in trouble.
    The trouble with that prescription is that it can be construed as reckless and naive. I don’t believe it is. It seems to me that government is facing a difficult time as it confronts the opportunities of this new age of connectedness, whose benefits won’t come without some vision and courage, as well as lashings of pragmatic experimentation and incremental learning.
