Nuclear catastrophes, speeding tickets and agility

How do you stop your stock of nuclear weapons accidentally blowing up the world? How do you devise a straightforward system for recording penalties on driving licences? Those sound like very different questions, but they turn out to have some unexpected similarities.

Let’s start with nuclear weapons. Eric Schlosser has written an extraordinary book about the command and control systems for the US nuclear arsenal. He describes the deep structural weaknesses in those systems, illustrated with a seemingly endless stream of examples of how those weaknesses came close to causing disaster, for reasons ranging from operational carelessness to fundamental design flaws, and with potential consequences ranging from contamination to conflagration.

The book is well worth reading but you can also watch Schlosser speaking at a recent RSA event in this video (the whole thing is almost an hour, but the meat starts at 3:30 and runs for about 20 minutes, the rest is Q&A – or there is a shorter version here).

One of the themes which came through very strongly from the book was the importance of maintenance and improvements. It turned out to be relatively easy to get eye-wateringly large budgets for the development and deployment of new weapons and almost impossibly difficult to get any money at all to improve control and safety systems for existing weapons. That’s partly a reflection of a military preference which underplays safety (given the risk of a bomb which doesn’t go off when it should, and one which does go off when it shouldn’t, some may see the first as more important than the second), but it’s also a reflection of a much more general political issue: it’s much more attractive to be responsible for delivering a new thing than for doing maintenance on an old one.

Meanwhile, rather less cataclysmically (except perhaps for him), Matthew Cain got a speeding ticket. He didn’t contest it, paid the fine and accepted the points on his licence. That’s an apparently simple and apparently online transaction which, at the end of a terrific blow-by-blow account, he summarises as involving ‘three public bodies, three different websites, four outbound letters, eight pieces of post in total’.

The problem is not that it’s impossible to go through the process. Indeed, the problem (in this form) only exists because it is possible to complete the transaction: if it weren’t, somebody would fix it. The problem is that it isn’t anybody’s priority (or anybody’s budget) to improve, streamline and integrate the current fragmented process. As Matthew points out, the obstacles to improvement are very real:

1. West Yorkshire Police has higher priorities. I suspect no senior manager will be held accountable for a slow, inefficient money-making service

2. Left to its own devices, West Yorkshire Police would probably redesign the service inefficiently, either relying on contractors to build a unique service or purchasing a proprietary service

3. The opportunities to improve the service are only incremental. West Yorkshire Police could redesign its part of the service but lacks control over payments or licensing issues (and, it appears, speed awareness courses)

4. Probably only the MOJ has the convening power to bring together its payment service, the DVLA’s licensing service and a police force’s processes. But to do so across 42 police forces would be a considerable hassle

5. The current incentives for government digital services prize the redesign of existing high-volume central government services. The redesign of speeding fines is probably low on attractiveness and achievability for the MOJ – although of all departments it’s probably best placed to make progress

Every description of agile development ends with ‘iterate’, and the GDS service design manual is explicit about what they call the live phase:

[Going live] is not the end of the process. The service should now be improved continuously, based on user feedback, analytics and further research.

You’ll repeat the whole process (discovery, alpha, beta and live) for smaller pieces of work as the service continues running. Find something that needs improvement, research solutions, iterate, release. That should be a constant rhythm for the operating team, and done rapidly.

Those are good principles, but they don’t on their own solve the problem. When budgets are tight, it’s easy enough to slip back to thinking that what is there is good enough, to finding workarounds rather than solutions, to accepting what works rather than looking for ways to make it better.

It’s not just the money, of course. Perhaps a little counter-intuitively, the opposite can be a problem too. We have all seen systems where features have been added and designs tweaked in ways which reduce utility, rather than adding to it.  Continuous improvement is virtuous if it delivers improvement, not if it is only continuous. Getting – and keeping – people with the right skills and the right attitudes to maintain or enhance a service may be more difficult than assembling the team to build it in the first place.

Nor of course is this just about IT. There are new public buildings which struggle to cover their running costs, new buses designed to run with a second crew member who will increasingly be absent, new phones with patchy network coverage.

But in a sense all of those are consequences of the deeper problem, that the wholly new is grander, more exciting and generally better rewarded. Slowly and painstakingly reducing risk and increasing resilience has much less obvious benefits. Except, perhaps, for avoiding nuclear devastation.


Minimum viable should also be maximum attainable

Last week, in a post ostensibly about fixing printers (so all is forgiven if you glazed over and didn’t read it), I talked about rapid and iterative design, but wondered how minimal a minimum viable product could be:

There is a risk the other way too, though: how good does it have to be to reach the minimum standard? Is anything functional good enough for a first attempt?

That stemmed from a concern that minimum and minimal are too easily confused, particularly for people with a history and culture of development not rooted in agile approaches.

Then today, through a tweet by James Moed, I came across a post by David Aycan with a title which encapsulated my thought much better than I had managed myself: Don’t Let the Minimum Win Over the Viable:

An MVP should be the easiest way to test your hypothesis, but that doesn’t mean that building one is easy. A common mistake is refusing to tackle the tough technical problems that create revolutionary offerings. As Ries writes, some entrepreneurs hear “minimum viable” product as “smallest imaginable” product. This misunderstanding of Lean Startup tenets can have expensive consequences.

As part of my own journey towards understanding and internalising some of these concepts, I started being very uncomfortable with the idea of minimum viability. I find it hard to separate it from connotations of the least you can get away with, which in one sense is not the idea at all (though in another sense, it is absolutely the idea, which is perhaps the source of the confusion). So I came up with the idea of the Maximum Attainable Product, not as a way of describing a different thing, but as a different way of describing the same thing.

Maximum attainable doesn’t mean building something big and unwieldy. Just as the best camera is the one you have with you, so the best product is the one you can start using. It means making the best and most ambitious thing possible within the discipline and constraints of the development approach being used. It means not compromising on quality and effectiveness while deliberately focusing on a reduced scope and set of needs to be addressed. In other words, it means (I think) the same as minimum viable.

Now I am not so sure that maximum attainable is the right phrase either. Its connotations of bloat and delay are no more escapable than the opposite connotations of minimum viable.

So I think I am hankering after a way of saying maximally attainable minimum viable product. Or possibly the other way round. Only with fewer words. All suggestions for a more felicitous turn of phrase are very welcome.

Print and be damned

The private sector, as everybody knows, has clear and customer-friendly systems and services, because the customer is king. The public sector, on the other hand, revels in the obscure and incomprehensible because… well, just because.

And so it was with a light heart that I set out to fix a security flaw in my printer (one of those chores which never existed until Moore’s law came along to free us from drudgery).

The rest of this post sets out the process in mind-numbing detail, partly as minor therapy for me, but more importantly because, as with several recent posts, telling these stories is a powerful way of bringing to life why service design is not just an affectation, and because every one of them is a reminder that we are constantly at risk of inflicting this pain unless we are as constantly vigilant in avoiding it. Fixing printers, opening bank accounts and paying for parking permits don’t on the surface seem to have much in common, but in each case the problem was not that the task could not be completed; it was that there was no sign that anybody involved in designing the service had ever systematically worked through what it would be like to do it as an ordinary user.

The blow-by-blow account which follows is a fairly grim one, but there is some good news. I went through this process and wrote the account back in January but never quite finished it. Quickly checking the links today shows that the process is now not quite as bad as it was then, with broken links fixed, cumbersome steps made slicker and FTP documents converted into web pages (it also reveals that the printer needs updating again, but let’s not worry about that for now).

But that in turn throws up an interesting question about agile approaches. Iterate, then iterate again say the GDS design principles:

The best way to build effective services is to start small and iterate wildly. Release Minimum Viable Products early, test them with real users, move from Alpha to Beta to Launch adding features and refinements based on feedback from real users.

I am with them on that, not least because of the risk they identify in the small print of bottleneck by specification. There is a risk the other way too, though: how good does it have to be to reach the minimum standard? Is anything functional good enough for a first attempt? If we are feeling relatively generous, we can set the bar fairly low for publishing a security alert where there may be real urgency (though it’s not as if this was the first time there had ever been a security alert), but there is a cost. If it was bad enough the first time, it doesn’t matter how good it gets by the second or third time, because I won’t be back to see. Not so much of an issue, perhaps, for an obscure bit of technical support, but critically important in trying to establish a customer relationship.

So the thing which is too often missed in the practice, rather than the rhetoric, of minimum viable products (though not, to be clear, by GDS) is that the minimum is about scope more than it is about quality. That’s not to say that further iterations won’t improve quality; done well, of course they will. It is that iteration and user feedback are no substitute for having done a good enough job in the first place.

And so to the quest through a land of magic and adventure…

Continue reading

Two worlds, not quite yet colliding

I went to two events yesterday.

The first was the launch of the Government Digital Service, or rather a housewarming party for their shiny new offices. In fine agile tradition, they put on a slick show and tell with short sharp presentations about their work and achievements, topped and tailed by Francis Maude, Mike Bracken (who has blogged his account of the day), Martha Lane Fox (likewise) and Ian Watmore. There was an enthusiastic crowd of supporters twittering furiously and other blog posts are starting to appear [added 10/12 – and GDS has now posted the presentation material and links to press coverage]. The dress code was smart casual, with a lot more emphasis on the casual than the smart. There was a buzz, a sense of creativity and spontaneity, of energy and talent unleashed, of an approach which felt a million miles away from both the stereotype and the reality of government projects.

Frustratingly, I had to leave early to get to the second event.

That was a much more sombre affair, closed and closed in, in an anonymous Treasury meeting room. The programme I work on was being reviewed, to check that we are managing effectively and are on track to deliver. There was little that was casual, in dress or anything else. There were plans, business cases, critical paths, migration strategies, decommissioning strategies, privacy impact assessments and a pile of other stuff besides. There was pointed questioning on risks, affordability and resilience. I make no complaint about that. We are spending public money – rather a lot of public money – and we should be challenged and tested on whether the spending is wise and the results assured. The track record of large government projects is not so great that there is room for complacency. But it felt a very long way from the world of GDS.

That matters, because actually the two are very closely linked. They represent, in effect, different ways of thinking about the same problem, and have roots in some of the same people and ideas. And in recognising that, I suddenly realised that I had rediscovered a thought I had first had at a seminar I went to almost four years ago, where both approaches were represented, each largely talking past the other. Tom Steinberg, who spoke at the point of inflection between the two, memorably started by saying that he completely disagreed with everything which had been said in the first half, and that the solution to the problem of big blundering IT projects was to have small, fleet-of-foot projects, not to find a cure for blundering. I reflected on the apparent tension then, as I reflected on it again today:

And then the penny dropped. The apparent gulf between the two parts of the seminar is itself the challenge.

We need to apply two different sets of disciplines (in both senses), in two separate domains:

  • An approach to the customer experience – both offline and online elements – which is flexible and responsive and which maximises its exposure to customer intelligence in order to do that
  • An approach to the supporting processes which is robust, consistent and correctly applies the full set of rules

The collective culture and skills of government are much more geared to the second than the first – and the risk is not just that we don’t do the first as well, but also that we can all too easily fail to spot the need to do it at all. The first is where there is the greatest need for change, flexibility and responsiveness – and where tools and approaches are available to deliver that responsiveness. The second requires the hard grind of implementing big robust systems which do the transactional heavy lifting as invisibly as possible.

Of course the distinction isn’t an absolute one, and of course each domain needs to incorporate the key strengths of the other.  But if we confuse them, we are at risk of getting the worst of both worlds.

My view has changed in the four years since then. I no longer think they are two different domains, they are aspects of what should be intrinsic in any approach (though scale and purpose will drive balance and relative importance). But perhaps there is a risk that big projects are still too much trying to learn the lessons of the last decade and too little trying to anticipate the needs of the next. It is no longer enough for systems to work (though they do, of course, absolutely have to work); they must work well, and work well specifically for the people who will use them. Or, as Helen Milner reported Mike Bracken as saying at another event yesterday:

That makes a lot of sense to me, though only if it is understood that in this context function is an integral part of beauty (as Brian Hoadley rightly challenged).

Conversely, though, it is not enough to make beautiful things (though, perhaps less obviously but no less necessarily, they do need to be beautiful). It is essential that they work, and work well, too.

Looked at one way, the core mission of GDS (and not just GDS) is to make beautiful things which work well. That means some of the values so apparent in the GDS event need to be more obvious in many other aspects of the work of government. We will have made great progress when discussions about projects in anonymous Treasury meeting rooms are more like the world of GDS. But as increasingly function begins to underpin beauty, it may also mean that the palest shadow of the Treasury meeting room also needs to fall across the sunny loft which is GDS.

One of the key tests of the success of GDS will be that when their turn comes to give an account of themselves in that room in Treasury, their approach is recognised and valued – and the work of every other project is being tested against it.  And another key test may be that that room will be a bit less anonymous, with its own wall of post-its and whiteboards.

Pictures by Paul Clarke

Alpha gorilla

Everybody who has had much to do with the development of government web services knows that there have been failures of imagination, failures of bravery, failures of technique and failures to seize opportunities – as well as successes in the teeth of opposition and incomprehension. Few have had the opportunity to start from scratch (though those who have have often made good use of that opportunity). So there are inevitably people who will look with envy at what the alpha gov team has achieved and, just as importantly, what it was given licence to achieve. Relly Annett-Baker caught that sense in her recent post:

The frustrating part is plenty of people before Alphagov could see the problems and probably a good few of the solutions too. They were not able to act on them (and many have privately told us of their struggles). And they probably feel like, well, like how everyone feels when the consultants waltz in and say exactly what you’ve been saying for the last however many months. We have been given the utopian blank slate that others have only dreamed was possible. To those people, I can only say this: we aren’t wasting the opportunity.

But everybody who has had much to do with government web services also knows the complexity of forces which bear down on creativity and design choices, sometimes from undue caution but at least as often from the fact that genuinely contradictory pressures have to be reconciled.

That’s where it starts to get interesting, because Alpha gov is beginning to find itself in this territory. It has come under criticism for the choices it has made about accessibility, for compromising on its approach to UX and even for the amount of white space frivolously scattered about the site. To my mind much more interestingly, questions are also being raised about its scalability and extensibility. One commenter on alpha gov’s about page puts it this way:

It looks good. Vast improvement on Directgov. Alpha seems like a great way to test and design the public face of e-govt and I’m sure a lot of the comments you get will praise the big leap forward in usability on show here. I hesitate to say this, but that’s the easy bit. Does your remit with Alpha go as far as testing the other side of this – i.e the other end of the transactional processes, within the Departments? It’s just as important that that alpha provides Departments with the flexibility, functionality and autonomy they need to adapt and develop their products and online services quickly, as it is to make sure the public interface works well. I suspect this will be hard though – the barriers will be more cultural and political than technical.

Alpha gov is a proof of concept. But what concept has it proved? That there are more arresting and more user friendly ways of building a government navigation site? Definitely. That starting with what users actually want to do, and then helping them do it is a good and (in this context) radical approach? Assuredly. That this could replace Directgov or become the heart of the single government domain (whatever that is)? Well, no. Not because it is clear that it couldn’t do those things, but because that is not what it has been built to test.

So what, then, is this alpha? Is it an alpha gorilla, asserting dominance and superiority? Or is it alpha software, tentatively tiptoeing into the daylight for a short and critical life before being cast aside?

The name is supposed to connote the second. But because of all the doubts, uncertainties and insecurities described above, some will inevitably hear the first. Tom Loosemore is horrified by that possibility. I don’t have a scintilla of doubt in his good faith but objectively, as the Marxists used to say, I think he may be wrong. The purpose of alpha gov is to challenge, to point fingers at the past and so, by implication, at those who have played parts in creating that past. The position it is aspiring to occupy is not some marginal piece of unimportant communication to a group nobody cares about; it is to be the new paradigm for the way the whole of government interacts with its citizens. It is to be the alpha gorilla, even if its chosen weapon is the alpha site. Aspirant alpha gorillas have to fight to establish their position. Some succeed, and dominate the pack (at least until the next aspirant comes along). Some fail, and are ejected. What we are seeing is the beginning of that fight.

I don’t think Tom and the alpha gov team need feel apologetic about that. But equally, I don’t think that most of those involved in creating the set of things alpha gov is there to challenge need to feel guilty or apologetic either. That’s because alpha gov is, in one important sense, a sleight of hand. It is proposing a technical solution to a supposedly technical problem. That’s good, but technology is not, fundamentally, the reason why the government’s web presence is as it is. The real problem is not technology but sociology. To the extent that the structure of government has been designed at all, it has been designed to be delivered in ways which can be managed. Government is not fragmented as an accident but as a way – for a long time the only possible way – of getting things done. One result of that, as I have argued before, is that there is no such thing as the government. The question then becomes how, in a world of rich and complicated public services, detailed legal frameworks (often highly specific to the service they regulate), every conceivable combination of personal characteristics and needs, and long tribal histories, we can nevertheless make things better by deploying the new and more powerful tools we now have available.

From that perspective, the primary power of alpha gov is not as a solution, but as a catalyst. It does less to provide answers than those who built it might have hoped or thought. But it does very starkly pose a question and demand an answer. Who chooses to pick up that question and answer it may show who is the real alpha in the pack.

Is this service design thinking?

I went a couple of weeks ago to a fascinating discussion about the nature of service design, organised around a book published last year called This is Service Design Thinking. The two editors of the book were due to lead the session but were at the wrong ends of a Skype three-way video conference which stuttered into a Dalekian half life without ever quite making the breakthrough into comprehensibility. After various attempts to rewire, reconfigure and reboot, we gave up and had what turned into a good conversation among the dozen people round the table in London.

I wasn’t taking notes, so I am not even going to attempt to capture the range of the discussion. Instead I want to reflect on one of the topics we touched on and on some subsequent thoughts about the different positions of public and private sector and smaller and larger organisations.

The topic, slightly unexpectedly, was the question of whether there was really any such thing as service design in the first place. Given the selection bias inherent in a group united round a book on the subject, that felt a little odd, but it reflects the novelty and uncertainty of service design as a discipline. As the book brings out, perceptions of service design can strongly depend on the starting discipline of the person doing the perceiving: anthropologists and marketing people tend to see the world rather differently. Several members of the group questioned whether service design was really user experience design talking itself up. I found that fascinating, because for me one of the attractions of the label is precisely to get away from questions of channel specificity: the service which needs to be designed is the complete set of interactions which take a customer securely, confidently and effectively from arrival to departure; treating that as a form of glorified interface design seems to slightly miss the point. That indeed points to one of my frustrations with my (very fast and superficial) reading of the book, that its case studies start too far in: the focus is very much on how design questions were answered rather than on how and why the questions were formulated – what was it that made the case study clients think that they wanted some service design in the first place? Interestingly, the one exception is the MyPolice case study (even allowing for my bias in its favour), which I suspect has a lot to do with being a self-generated project where service designers and product owners were the same people.

That mirrored the perception by some in the discussion that service design was a fine idea for small scale experiments in the voluntary sector, but wasn’t (or at least wasn’t yet) something which could be offered to a demanding commercial client interested only in immediate increases in sales. The sense seemed to be that it was too nebulous and unproven, and that clients overwhelmingly preferred to buy specific skills in the minimum necessary quantities rather than some more broadly based and perhaps more amorphous service (and worth noting that many of the participants were in the web consultancy business one way or another, with billable hours the only real currency).

There is a common perception in (and of) the public sector that we are constrained by process and caution, insulated from commercial pressures to be innovative and creative, and so compelled to lag behind the cutting edge of the private sector (a pattern perhaps illustrated by no fewer than three recent posts on public sector procurement, all very well worth reading). So it was fascinating for me to see a sort of reciprocal envy, from people in intensely commercial roles and organisations, who saw in the public and voluntary sectors a degree of freedom from very short term financial metrics which could allow more integrated and less hard edged approaches to flourish.

It was an interesting discussion, though not the one I had expected, based on a book which perhaps shares the power and fragility of the concept which is its subject. It is itself, very self-consciously, a piece of service design, using colour, symbols and layout to provide a much richer experience than any more conventional book. That added up to a splendid reminder of how much more powerful and engaging a book can be than any online substitute yet devised – and yet elements of the design self-consciousness were as irritating as others were inspiring. It all felt as though it was trying just a little too hard to be clever, and perhaps that is an indicator of a discipline with huge power and potential, but which has not quite reached the maturity and self-confidence to make best use of them.

There is no such thing as IT

I have cracked the problem of government IT.

A few months ago, I argued that there is no such thing as the government.  Now, in a further breakthrough, I have realised that there is no such thing as IT either. Putting those two thoughts together leads unavoidably to the conclusion that the problem of government IT is not what it appears to be.

The prompt for this realisation is last month’s report from the Institute for Government, System Error, bravely subtitled, ‘Fixing the flaws in government IT’. The first thing to say about it is that it is an excellent piece of work, setting out very clearly some of the problems which beset large IT projects and how those problems would be better avoided, not by sticking ever more rigidly to formal processes which are themselves root causes of the problems, but by switching to radically different approaches to developing and delivering projects.

But the second thing to say about it is that it is deeply flawed in ways which stem from perceiving all of this to be an IT problem in the first place. Grouping issues together because there is a technology dimension is not wrong, but nor is it the only approach and, I would argue very strongly, it is not the most useful approach to addressing the problems the report is seeking to solve. The conclusion I draw from the report’s own arguments is that seeing this as being about fixing the flaws in government IT is itself one of those flaws.

I claim no originality for this thought. I came into the world of e-government just as a report was being completed on making government IT projects work better, in response to a perception that too many were failing. Although the report was called Successful IT, its very first page stated firmly that:

Our most important message is that thinking in terms of ‘IT projects’ is itself a primary source of problems. Delivering IT is only ever part of the implementation of new, more effective, ways of working. The IT has to fit closely, for example, with the demands of the public and the new working practices needed to produce the desired changes. Achieving this requires a clear vision of the context in which IT is being implemented.

A change of approach is needed. Rather than think of IT projects, the public sector needs to think in terms of projects to change the way Government works, of which new IT is an important part. Our recommendations aim to achieve this change.

I don’t for a moment suppose that the authors of System Error would disagree with any of this, but it doesn’t seem to be a central theme of their own thinking.

Instead, they focus on two basic concepts:

  • platform, which is essentially about commoditising as much as possible, reinforced by common standards and functions across government; and
  • agile, which is about how new systems and services are developed (and so, by almost necessary implication, about things which are not commodities).

Their analysis of each of them is powerful and persuasive, but I think in the end takes them to some unhelpful conclusions because of a perception that the two are linked by being part of something called IT. Slightly oddly, the report contains within it the seeds of a very different – and to my mind much more powerful – understanding of both problems and solutions than it in fact draws from the evidence.

The body of the report opens with a strong attack on traditional waterfall approaches, particularly for the kinds of problems governments often find themselves needing to deal with:

Government is frequently tasked with tackling ‘wicked issues’, such as reducing child poverty or combating antisocial behaviour, where no one can be certain what the best solution is. For these kinds of complex problems, the best way to determine what works can often be to experiment, learn and improve as you go in an evolutionary approach. However, government IT has rarely supported this iterative and exploratory style of developing solutions. When uncertainty is high, ‘locking down’ solutions is exactly the wrong approach and virtually guarantees that a sub-optimal solution will be developed. (p28)

With equal fervour, the following chapter describes a very different approach:

Compared with traditional tools and methods, agile offers a fundamentally different approach to tackling business problems. Where the traditional approach favours complete solutions developed in a linear fashion, agile encourages a modular approach using short iterations to learn and adapt. Instead of trying to lock down requirements and minimise changes up-front, agile encourages continuous experimentation, improvement and reprioritisation. The approach to user involvement also differs fundamentally. The traditional approach favours heavy user engagement up-front to determine and lock down detailed specifications and at the end to test the final product. In contrast, agile embeds users in the process for the duration of the project, making them an integral part of the development team rather than a constituency to be consulted. (p31)

That’s an approach which certainly started off as a method for software development, but that’s not to say that’s the most helpful way to think about it in this context.  The message I get from the report is that IT is too separate from the business and needs to get closer to it.  That’s not wrong, and if it were the only available option, it would be well worth doing.  But at one level, all that’s doing is moving from the picture on the left below to the one on the right.

That’s not a bad change to make, but it doesn’t look very exciting, and it isn’t – and in both versions, both agile and platform stay firmly in the IT circles.

I am increasingly of the view that those pictures embody an unhelpful model of how organisations and services actually work – and obscure a fundamental division of labour within the IT function to boot. Neither ‘agile’ nor ‘platform’ happens in isolation in the IT organisation – or rather they do, but that’s a problem to be addressed, not a state of affairs to be emulated. Instead, each of those words describes part of what, typically, happens across the organisation. Some of it is about continuing to do what the organisation does, in a repeatable and stable way, and some of it is about finding new and better ways of doing those things, or indeed of doing different things altogether. Treating agile and platform as things which end at the boundary of something called IT reinforces a compartmentalised view more likely to exacerbate problems than to solve them.

It is also true that agile and platform are less distinct than the report argues. The idea of using commodity tools, services and platforms to create agile, radical and distinctive applications is hardly novel or distinctive.  That is, after all, a way of describing the internet and the services which run across it, which is all not as new-fangled as it used to be.

A better approach is to recognise that stable things and innovative things both need to belong to the organisation as a whole – or rather to the users of the organisation’s services – rather than artificially to something called IT or something else called business. The crucial point, though, is in their relative positioning. There is a strong case for activities designed to change what is done being agile; there is no case for those projects being focused on, still less driven by, IT change. That is wholly consistent with the agile philosophy, which emphasises user stories and engagement and participation from every relevant interest, but is less consistent with some of what seems to get labelled agile practice.

So I think we need a picture much more like the one below.  IT is still there, as it should be:  it is a skilled and professional discipline with an essential contribution to make, but it is only one element of thinking about system changes.  Both what is standardised and commoditised and what is customised need to be more comprehensive in scope, though their relative balance probably should be different.

So by all means, let us have commoditised platforms and let the provision, support and reliable operation of them be an IT service to the wider organisation. Even there, though, we need to be careful about the life cycle of a commodity. Email and web access are commodities, but that doesn’t necessarily make being stuck with a ten-year-old email client and IE6 an attractive place to be. Government has a web CMS platform which was intended to be a commodity, but is locked inflexibly into a world before social media. And even the most commoditised of basic internal services can suffer if their development is approached in any less agile a way than any other project, with a particular risk of being optimised for specialist rather than more general users.

But when we get beyond commodities, the picture gets very different. I don’t think the authors of the IfG report see change as being purely about IT; quite clearly they do not. But the language of the report repeatedly gets trapped in making ‘IT’ and ‘not-IT’ the key organising categories:

The principles of the agile approach can be applied to almost any IT project and could also be considered for non-IT related applications. (p47)

Though that is almost immediately followed by:

Agile may have originated as a software development tool, but many of its principles can be used much more widely. Projects should be modular, iterative, responsive to change and have users at the core. (p47)

Much of that comes from the fact that, at least to start with, agile is about software development, without ever being about IT. The principles it embodies can indeed be applied much more widely, in ways which themselves have a huge impact on the nature of the project. Indeed, I would go so far as to say that the application of agile principles is inconsistent with the concept of an IT project.

So it is not possible to make change management work better in government as long as the challenge is expressed as being about fixing IT or about distinguishing IT and non-IT projects. The power – and the threat – of agile is that it undermines sequential professional silos. From what I have seen of it, it is not inherently more difficult, but it is undoubtedly very different, and, like any new approach, its adaptation and adoption need to be taken seriously. Particularly for large and cumbersome organisations, such as are occasionally found in the public sector, there is a real risk that agile becomes a trendy label for doing waterfall in fortnightly drops.

So by all means, let’s talk about platform, agile, commodities and customisation – but let’s not squeeze them under a heading of IT where they do not fit. Once we break free of that last critical element of the waterfall mindset, we can think seriously about how to do all this better.

That still leaves, of course, the question of how best to apply agile approaches, and indeed whether they can be made to work at scale in government at all.  That’s an important debate, the subject of a flurry of recent blog posts, but it’s too big a question to come to late in a long post. So that’s a topic to return to, but one where I will be arguing that agile gives us our best hope of reconciling the challenges of large scale change in public services with efficient and effective delivery.

Design Jam

I have written a couple of times about the gap I see between the brilliance of hack days, as exemplified by Rewired State, and the need to build customer needs and user experience into the mix:

These projects can get off to a great start using their originators as their own use case, but they won’t become sustainable on that basis. Government has painfully learned – or, rather, is painfully learning – that starting off with the assumption that you know what is best for people doesn’t deliver the greatest results. I am not quite sure where the tipping point comes between creator-evangelists and customer-centred design, but I am sure it has to come somewhere.

So I was delighted to spot this flowing through the Twitter timeline:

The concept of a design jam is a new one to me, but it sounds as though it’s a cross between a hack day and an unconference/barcamp:

Design Jams are one-day design sessions, during which people team up to solve engaging UX challenges.

While conferences and talks are very popular in the UX community, we don’t have many events for actual collaboration, like the ‘hackdays’ enjoyed by the development community. Design Jams get designers together to learn from each other while working on actual problems. The sessions champion open-source thinking and are non-profit, run by local volunteers.

Sounds like a fantastic idea, even though I am left slightly wondering how you do user experience design without involving some users. I am not remotely qualified to go myself, but would be fascinated to see the final presentations – it would be great if the organisers were to open those up to interested non-participants.

Tickets are available from 1pm on Monday.

Release early, release often

The Google Reader team are pleased with themselves, not without reason:

Today we built the 500th version of Reader; over the 5 years that we’ve worked on Reader, that works out to almost two builds a week.

I suspect that’s distinctive, if at all, only in that they are both keeping score and saying so in public. I heard an Amazon person talk about version 300 or so quite a few years ago. Closer to home, I remember Tom Steinberg saying the No10 petitions site was on version 3 (or was it 5?) at the end of the first day of live running.

Some of that, of course, is about how you choose to count things. But a lot of it isn’t.

Re: Re-Rewired State

Rewired State has taken another step towards becoming the next generation systems integrator for government.  In a piece of delightful recursion, a Rewired State project becomes the vehicle for accessing the formal status of Rewired State – or as it has been since last Monday, Rewired State Ltd.
Companies Open House - Rewired State - Recursion
In other news, the Rewired State gang has just announced their plans for March 2010.  Last year’s National Hack the Government Day still sticks in my mind as something of a personal turning point.  It was the first time I got a sense of the energy and opportunity represented by that community, and as I summed it up in my account of the day:

The simple fact that lots of smart people thought the best thing they could do with their Saturday was to think really hard about how to make government better is a force for good we cannot afford to lose.

This year, that’s just one of four events being run under the Rewired State banner.  What’s really interesting is what appears to be an entirely non-accidental absence of any sense of groundhog day.  The world has moved on in the last year – for which the Rewired State crew can take some of the credit.  The question is no longer, can bright people do smart things with government data?  That is proven beyond challenge.  The question now is how those ideas can break through from demonstrator and prototype to robust and scalable service, and from services which are available and potentially useful to services which are used and celebrated.  So it is really interesting to see that two of the four events involve paying developers to tackle specific problems while still leaving plenty of space for the more anarchic hack day itself.

I have a longer post on some of these issues which I had been writing over the last few days, before any of this was announced, and which should see the light of day imminently. But since I am absolutely unqualified actually to take part in Rewired State (the last time I did anything which looked at all like coding was in 1987, and it wasn’t many years before that that I learned error correction by hand-punching paper tape…), it can’t be too soon to start blagging for a seat at the presentations.