Minimum viable should also be maximum attainable

Last week, in a post ostensibly about fixing printers (so all is forgiven if you glazed over and didn’t read it), I talked about rapid and iterative design, but wondered how minimal a minimum viable product could be:

There is a risk the other way too, though: how good does it have to be to reach the minimum standard? Is anything functional good enough for a first attempt?

That stemmed from a concern that minimum and minimal are too easily confused, particularly for people with a history and culture of development not rooted in agile approaches.

Then today, through a tweet by James Moed, I came across a post by David Aycan with a title which encapsulated my thought much better than I had managed myself: Don’t Let the Minimum Win Over the Viable:

An MVP should be the easiest way to test your hypothesis, but that doesn’t mean that building one is easy. A common mistake is refusing to tackle the tough technical problems that create revolutionary offerings. As Ries writes, some entrepreneurs hear “minimum viable” product as “smallest imaginable” product. This misunderstanding of Lean Startup tenets can have expensive consequences.

As part of my own journey towards understanding and internalising some of these concepts, I started being very uncomfortable with the idea of minimum viability. I find it hard to separate from connotations of the least you can get away with, which in one sense is not the idea at all (though in another sense, it is absolutely the idea, which is perhaps the source of the confusion). So I came up with the idea of the Maximum Attainable Product, not as a way of describing a different thing, but as a different way of describing the same thing.

Maximum attainable doesn’t mean building something big and unwieldy. Just as the best camera is the one you have with you, so the best product is the one you can start using. It means making the best and most ambitious thing possible within the discipline and constraints of the development approach being used. It means not compromising on quality and effectiveness while deliberately focusing on a reduced set of scope and needs addressed. In other words, it means (I think) the same as minimum viable.

Now I am not so sure that maximum attainable is the right phrase either. Its connotations of bloat and delay are no more escapable than the opposite connotations of minimum viable.

So I think I am hankering after a way of saying maximally attainable minimum viable product. Or possibly the other way round. Only with fewer words. All suggestions for a more felicitous turn of phrase are very welcome.

Print and be damned

The private sector, as everybody knows, has clear and customer-friendly systems and services, because the customer is king. The public sector, on the other hand, revels in the obscure and incomprehensible because… well, just because.

And so it was with a light heart that I set out to fix a security flaw in my printer (one of those chores which never existed until Moore’s law came along to free us from drudgery).

The rest of this post sets out the process in mind-numbing detail, partly as minor therapy for me, but more importantly because, as with several recent posts, telling these stories is a powerful way of bringing to life why service design is not just an affectation, and because every one of them is a reminder that we are constantly at risk of inflicting this pain unless we are as constantly vigilant in avoiding it. Fixing printers, opening bank accounts and paying for parking permits don’t on the surface seem to have much in common, but in each case the problem was not that the task could not be completed; it was that there was no sign that anybody involved in designing the service had ever systematically worked through what it would be like to do it as an ordinary user.

The blow-by-blow account which follows is a fairly grim one, but there is some good news. I went through this process and wrote the account back in January but never quite finished it. Quickly checking the links today shows that the process is now not quite as bad as it was then, with broken links fixed, cumbersome steps made slicker and FTP documents converted into web pages (it also reveals that the printer needs updating again, but let’s not worry about that for now).

But that in turn throws up an interesting question about agile approaches. Iterate, then iterate again, say the GDS design principles:

The best way to build effective services is to start small and iterate wildly. Release Minimum Viable Products early, test them with real users, move from Alpha to Beta to Launch adding features and refinements based on feedback from real users.

I am with them on that, not least because of the risk they identify in the small print of bottleneck by specification. There is a risk the other way too, though: how good does it have to be to reach the minimum standard? Is anything functional good enough for a first attempt? If we are feeling relatively generous, we can set the bar fairly low for publishing a security alert where there may be real urgency (though it’s not as if this was the first time there had ever been a security alert), but there is a cost. If it was bad enough the first time, it doesn’t matter how good it gets by the second or third time, because I won’t be back to see. Not so much of an issue, perhaps, for an obscure bit of technical support, but critically important in trying to establish a customer relationship.

So the thing which is too often missed in the practice, rather than the rhetoric, of minimum viable products (though not, to be clear, by GDS) is that the minimum is about scope more than it is about quality. That’s not to say that further iterations won’t improve quality; done well of course they will. It is that iteration and user feedback are no substitute for not having done a good enough job in the first place.

And so to the quest through a land of magic and adventure…


Two worlds, not quite yet colliding

I went to two events yesterday.

The first was the launch of the Government Digital Service, or rather a housewarming party for their shiny new offices. In fine agile tradition, they put on a slick show and tell with short sharp presentations about their work and achievements topped and tailed by Francis Maude, Mike Bracken (who has blogged his account of the day), Martha Lane Fox (likewise) and Ian Watmore. There was an enthusiastic crowd of supporters twittering furiously and other blog posts are starting to appear [added 10/12 - and GDS has now posted the presentation material and links to press coverage]. The dress code was smart casual, with a lot more emphasis on the casual than the smart. There was a buzz, a sense of creativity and spontaneity, of energy and talent unleashed, of an approach which felt a million miles away from both the stereotype and the reality of government projects.

Frustratingly, I had to leave early to get to the second event.

That was a much more sombre affair, closed and closed in, in an anonymous Treasury meeting room. The programme I work on was being reviewed, to check that we are managing effectively and are on track to deliver. There was little that was casual, in dress or anything else. There were plans, business cases, critical paths, migration strategies, decommissioning strategies, privacy impact assessments and a pile of other stuff besides. There was pointed questioning on risks, affordability and resilience. I make no complaint about that. We are spending public money – rather a lot of public money – and we should be challenged and tested on whether the spending is wise and the results assured. The track record of large government projects is not so great that there is room for complacency. But it felt a very long way from the world of GDS.

That matters, because actually the two are very closely linked. They represent, in effect, different ways of thinking about the same problem, and have roots in some of the same people and ideas. And in recognising that, I suddenly realised that I had rediscovered a thought I had first had at a seminar I went to almost four years ago, where both approaches were represented, each largely talking past the other. Tom Steinberg, who spoke at the point of inflection between the two, memorably started by saying that he completely disagreed with everything which had been said in the first half, and that the solution to the problem of big blundering IT projects was to have small, fleet-of-foot projects, not to find a cure for blundering. I reflected on the apparent tension then as I reflected today:

And then the penny dropped. The apparent gulf between the two parts of the seminar is itself the challenge.

We need to apply two different sets of disciplines (in both senses), in two separate domains:

  • An approach to the customer experience – both offline and online elements – which is flexible and responsive and which maximises its exposure to customer intelligence in order to do that
  • An approach to the supporting processes which is robust, consistent and correctly applies the full set of rules

The collective culture and skills of government are much more geared to the second than the first – and the risk is not just that we don’t do the first as well, but also that we can all too easily fail to spot the need to do it at all. The first is where there is the greatest need for change, flexibility and responsiveness – and where tools and approaches are available to deliver that responsiveness. The second requires the hard grind of implementing big robust systems which do the transactional heavy lifting as invisibly as possible.

Of course the distinction isn’t an absolute one, and of course each domain needs to incorporate the key strengths of the other.  But if we confuse them, we are at risk of getting the worst of both worlds.

My view has changed in the four years since then. I no longer think they are two different domains, they are aspects of what should be intrinsic in any approach (though scale and purpose will drive balance and relative importance). But perhaps there is a risk that big projects are still too much trying to learn the lessons of the last decade and too little trying to anticipate the needs of the next. It is no longer enough for systems to work (though they do, of course, absolutely have to work); they must work well, and work well specifically for the people who will use them. Or, as Helen Milner reported Mike Bracken as saying at another event yesterday:

[blackbirdpie url="https://twitter.com/#!/helenmilner/status/144713550638755840"]

That makes a lot of sense to me, though only if it is understood that in this context function is an integral part of beauty (as Brian Hoadley rightly challenged).

Conversely, it is not enough to make beautiful things (though, perhaps less obviously but no less necessarily, they do need to be beautiful). It is essential that they work, and work well, too.

Looked at one way, the core mission of GDS (and not just GDS) is to make beautiful things which work well. That means some of the values so apparent in the GDS event need to be more obvious in many other aspects of the work of government. We will have made great progress when discussions about projects in anonymous Treasury meeting rooms are more like the world of GDS. But as increasingly function begins to underpin beauty, it may also mean that the palest shadow of the Treasury meeting room also needs to fall across the sunny loft which is GDS.

One of the key tests of the success of GDS will be that when their turn comes to give an account of themselves in that room in Treasury, their approach is recognised and valued – and the work of every other project is being tested against it.  And another key test may be that that room will be a bit less anonymous, with its own wall of post-its and whiteboards.

Pictures by Paul Clarke

Alpha gorilla

Everybody who has had much to do with the development of government web services knows that there have been failures of imagination, failures of bravery, failures of technique and failures to seize opportunities – as well as successes in the teeth of opposition and incomprehension. Few have had the opportunity to start from scratch (though those who have, have often made good use of that opportunity). So there are inevitably people who will look with envy at what the alpha gov team has achieved and, just as importantly, what it was given licence to achieve. Relly Annett-Baker caught that sense in her recent post:

The frustrating part is plenty of people before Alphagov could see the problems and probably a good few of the solutions too. They were not able to act on them (and many have privately told us of their struggles). And they probably feel like, well, like how everyone feels when the consultants waltz in and say exactly what you’ve been saying for the last however many months. We have been given the utopian blank slate that others have only dreamed was possible. To those people, I can only say this: we aren’t wasting the opportunity.

But everybody who has had much to do with government web services also knows the complexity of forces which bear down on creativity and design choices, sometimes from undue caution but at least as often from the fact that genuinely contradictory pressures have to get reconciled.

That’s where it starts to get interesting, because Alpha gov is beginning to find itself in this territory. It has come under criticism for the choices it has made about accessibility, for compromising on its approach to UX and even for the amount of white space frivolously scattered about the site. To my mind much more interestingly, questions are also being raised about its scalability and extensibility. One commenter on alpha gov’s about page puts it this way:

It looks good. Vast improvement on Directgov. Alpha seems like a great way to test and design the public face of e-govt and I’m sure a lot of the comments you get will praise the big leap forward in usability on show here. I hesitate to say this, but that’s the easy bit. Does your remit with Alpha go as far as testing the other side of this – i.e the other end of the transactional processes, within the Departments? It’s just as important that that alpha provides Departments with the flexibility, functionality and autonomy they need to adapt and develop their products and online services quickly, as it is to make sure the public interface works well. I suspect this will be hard though – the barriers will be more cultural and political than technical.

Alpha gov is a proof of concept. But what concept has it proved? That there are more arresting and more user friendly ways of building a government navigation site? Definitely. That starting with what users actually want to do, and then helping them do it is a good and (in this context) radical approach? Assuredly. That this could replace Directgov or become the heart of the single government domain (whatever that is)? Well, no. Not because it is clear that it couldn’t do those things, but because that is not what it has been built to test.

So what, then, is this alpha? Is it an alpha gorilla, asserting dominance and superiority? Or is it alpha software, tentatively tiptoeing into the daylight for a short and critical life before being cast aside?

The name is supposed to connote the second. But because of all the doubts, uncertainties and insecurities described above, some will inevitably hear the first. Tom Loosemore is horrified by that possibility. I don’t have a scintilla of doubt in his good faith but objectively, as the Marxists used to say, I think he may be wrong. The purpose of alpha gov is to challenge, to point fingers at the past and so, by implication, at those who have played parts in creating that past. The position it is aspiring to occupy is not some marginal piece of unimportant communication to a group nobody cares about, it is to be the new paradigm for the way the whole of government interacts with its citizens. It is to be the alpha gorilla, even if its chosen weapon is the alpha site. Aspirant alpha gorillas have to fight to establish their position. Some succeed, and dominate the pack (at least until the next aspirant comes along). Some fail, and are ejected. What we are seeing is the beginning of that fight.

I don’t think Tom and the alpha gov team need feel apologetic about that. But equally, I don’t think that most of those involved in creating the set of things alpha gov is there to challenge need to feel guilty or apologetic either. That’s because alpha gov is, in one important sense, a sleight of hand. It is proposing a technical solution to a supposedly technical problem. That’s good, but technology is not, fundamentally, the reason why the government’s web presence is as it is. The real problem is not technology but sociology. To the extent that the structure of government has been designed at all, it has been designed to be delivered in ways which can be managed. Government is not fragmented as an accident but as a way – for a long time the only possible way – of getting things done. One result of that, as I have argued before, is that there is no such thing as the government. The question then becomes how, in a world of rich and complicated public services, detailed legal frameworks (often highly specific to the service they regulate), every conceivable combination of personal characteristics and needs, and long tribal histories, we can nevertheless make things better by deploying the new and more powerful tools we now have available.

From that perspective, the primary power of alpha gov is not as a solution, but as a catalyst. It does less to provide answers than those who built it might have hoped or thought. But it does very starkly pose a question and demand an answer. Who chooses to pick up that question and answer it may show who is the real alpha in the pack.

Is this service design thinking?


I went a couple of weeks ago to a fascinating discussion about the nature of service design, organised around a book published last year called This is Service Design Thinking. The two editors of the book were due to lead the session but were at the wrong ends of a Skype three-way video conference which stuttered into a Dalekian half life without really quite making the breakthrough into comprehensibility. After various attempts to rewire, reconfigure and reboot, we gave up and had what turned into a good conversation among the dozen people round the table in London.

I wasn’t taking notes, so I am not even going to attempt to capture the range of the discussion. Instead I want to reflect on one of the topics we touched on and on some subsequent thoughts about the different positions of public and private sector and smaller and larger organisations.

The topic, slightly unexpectedly, was the question of whether there was really any such thing as service design in the first place. Given the selection bias inherent in a group united round a book on the subject, that felt a little odd, but reflects the novelty and uncertainty of service design as a discipline. As the book brings out, perceptions of service design can strongly depend on the starting discipline of the person doing the perceiving: anthropologists and marketing people tend to see the world rather differently. Several members of the group questioned whether service design was really user experience design talking itself up. I found that fascinating, because for me one of the attractions of the label is precisely to get away from questions of channel specificity: the service which needs to be designed is the complete set of interactions which take a customer securely, confidently and effectively from arrival to departure; treating that as a form of glorified interface design seems to slightly miss the point. That indeed points to one of my frustrations with my (very fast and superficial) reading of the book, that its case studies start too far in: the focus is very much on how design questions were answered rather than on how and why the questions were formulated – what was it that made the case study clients think that they wanted some service design in the first place? Interestingly, the one exception is the MyPolice case study (even allowing for my bias in its favour), which I suspect has a lot to do with being a self-generated project where service designers and product owners were the same people.

That mirrored the perception by some in the discussion that service design was a fine idea for small scale experiments in the voluntary sector, but wasn’t (or at least wasn’t yet) something which could be offered to a demanding commercial client interested only in immediate increases in sales. The sense seemed to be that it was too nebulous and unproven, and that clients overwhelmingly preferred to buy specific skills in the minimum necessary quantities rather than some more broadly based and perhaps more amorphous service (and worth noting that many of the participants were in the web consultancy business one way or another, with billable hours the only real currency).

There is a common perception in (and of) the public sector that we are constrained by process and caution, insulated from commercial pressures to be innovative and creative, and so compelled to lag behind the cutting edge of the private sector (a pattern perhaps illustrated by no fewer than three recent posts on public sector procurement, all very well worth reading). So it was fascinating for me to see a sort of reciprocal envy, from people in intensely commercial roles and organisations, who saw in the public and voluntary sectors a degree of freedom from very short term financial metrics which could allow more integrated and less hard edged approaches to flourish.

It was an interesting discussion, though not the one I had expected, based on a book which perhaps shares the power and fragility of the concept which is its subject. It is itself, very self-consciously, a piece of service design, using colour, symbols and layout to provide a much richer experience than any more conventional book. That added up to a splendid reminder of how much more powerful and engaging a book can be than any online substitute yet devised – and yet elements of the design self-consciousness were as irritating as others were inspiring. It all felt as though it was trying just a little too hard to be clever, and perhaps that is an indicator of a discipline with huge power and potential, but which has not quite reached the maturity and self-confidence to make best use of them.

There is no such thing as IT

I have cracked the problem of government IT.

A few months ago, I argued that there is no such thing as the government.  Now, in a further breakthrough, I have realised that there is no such thing as IT either. Putting those two thoughts together leads unavoidably to the conclusion that the problem of government IT is not what it appears to be.

The prompt for this realisation is last month’s report from the Institute for Government, System Error, bravely subtitled, ‘Fixing the flaws in government IT’. The first thing to say about it is that it is an excellent piece of work, setting out very clearly some of the problems which beset large IT projects and how those problems would be better avoided, not by sticking ever more rigidly to formal processes which are themselves root causes of the problems, but by switching to radically different approaches to developing and delivering projects.

But the second thing to say about it is that it is deeply flawed in ways which stem from perceiving all of this to be an IT problem in the first place. Grouping issues together because there is a technology dimension is not wrong, but nor is it the only approach and, I would argue very strongly, it is not the most useful approach to addressing the problems the report is seeking to solve. The conclusion I draw from the report’s own arguments is that seeing this as being about fixing the flaws in government IT is itself one of those flaws.

I claim no originality for this thought. I came into the world of e-government just as a report was being completed on making government IT projects work better, in response to a perception that too many were failing.  Although the report was called Successful IT, its very first page stated firmly that:

Our most important message is that thinking in terms of ‘IT projects’ is itself a primary source of problems. Delivering IT is only ever part of the implementation of new, more effective, ways of working. The IT has to fit closely, for example, with the demands of the public and the new working practices needed to produce the desired changes. Achieving this requires a clear vision of the context in which IT is being implemented.

A change of approach is needed. Rather than think of IT projects, the public sector needs to think in terms of projects to change the way Government works, of which new IT is an important part. Our recommendations aim to achieve this change.

I don’t for a moment suppose that the authors of System Error would disagree with any of this, but it doesn’t seem to be a central theme of their own thinking.

Instead, they focus on two basic concepts:

  • platform, which is essentially about the commoditisation of as much as possible, reinforced by common standards and functions across government; and
  • agile, which is about how new systems and services are developed (and so, by almost necessary implication, about things which are not commodities).

Their analysis of each of them is powerful and persuasive, but I think in the end takes them to some unhelpful conclusions because of a perception that the two are linked by being part of something called IT.  Slightly oddly, the report contains within it the seeds of a very different – and to my mind much more powerful – understanding of both problems and solutions than it in fact draws from the evidence.

The body of the report opens with a strong attack on traditional waterfall approaches, particularly for the kinds of problems governments often find themselves needing to deal with:

Government is frequently tasked with tackling ‘wicked issues’, such as reducing child poverty or combating antisocial behaviour, where no one can be certain what the best solution is. For these kinds of complex problems, the best way to determine what works can often be to experiment, learn and improve as you go in an evolutionary approach. However, government IT has rarely supported this iterative and exploratory style of developing solutions. When uncertainty is high, ‘locking down’ solutions is exactly the wrong approach and virtually guarantees that a sub-optimal solution will be developed. (p28)

With equal fervour, the following chapter describes a very different approach:

Compared with traditional tools and methods, agile offers a fundamentally different approach to tackling business problems. Where the traditional approach favours complete solutions developed in a linear fashion, agile encourages a modular approach using short iterations to learn and adapt. Instead of trying to lock down requirements and minimise changes up-front, agile encourages continuous experimentation, improvement and reprioritisation. The approach to user involvement also differs fundamentally. The traditional approach favours heavy user engagement up-front to determine and lock down detailed specifications and at the end to test the final product. In contrast, agile embeds users in the process for the duration of the project, making them an integral part of the development team rather than a constituency to be consulted. (p31)

That’s an approach which certainly started off as a method for software development, but that’s not to say that’s the most helpful way to think about it in this context.  The message I get from the report is that IT is too separate from the business and needs to get closer to it.  That’s not wrong, and if it were the only available option, it would be well worth doing.  But at one level, all that’s doing is moving from the picture on the left below to the one on the right.

That’s not a bad change to make, but it doesn’t look very exciting, and it isn’t – and in both versions, both agile and platform stay firmly in the IT circles.

I am increasingly of the view that those pictures embody an unhelpful model of how organisations and services actually work – and obscure a fundamental division of labour within the IT function to boot. Neither ‘agile’ nor ‘platform’ happens in isolation in the IT organisation, or rather they do, but that’s a problem to be addressed, not a state of affairs to be emulated. Instead, each of those words describes part of what, typically, happens across the organisation. Some of it is about continuing to do what the organisation does, in a repeatable and stable way, and some of it is about finding new and better ways of doing those things, or indeed of doing different things altogether. Treating agile and platform as things which end at the boundary of something called IT reinforces a compartmentalised view more likely to exacerbate problems than to solve them.

It is also true that agile and platform are less distinct than the report argues. The idea of using commodity tools, services and platforms to create agile, radical and distinctive applications is hardly novel or distinctive.  That is, after all, a way of describing the internet and the services which run across it, which is now not as new-fangled as it used to be.

A better approach is to recognise that stable things and innovative things both need to belong to the organisation as a whole – or rather to the users of the organisation’s services – rather than artificially to something called IT or something else called business. The crucial point, though, is in their relative positioning. There is a strong case for activities designed to change what is done being agile; there is no case for those projects being focused on, still less driven by, IT change. That is wholly consistent with the agile philosophy, which emphasises user stories and engagement and participation from every relevant interest, but is less consistent with some of what seems to get labelled agile practice.

So I think we need a picture much more like the one below.  IT is still there, as it should be:  it is a skilled and professional discipline with an essential contribution to make, but it is only one element of thinking about system changes.  Both what is standardised and commoditised and what is customised need to be more comprehensive in scope, though their relative balance probably should be different.

So by all means, let us have commoditised platforms and let the provision, support and reliable operation of them be an IT service to the wider organisation. Even there though, we need to be careful about the life cycle of a commodity. Email and web access are commodities, but that doesn’t necessarily make getting stuck with a ten-year-old email client and IE6 an attractive place to be. Government has a web CMS platform which was intended to be a commodity, but is locked inflexibly into a world before social media. And even the most commoditised of basic internal services can suffer if their development is approached in any less agile a way than any other project, with a particular risk of being optimised for specialist rather than more general users.

But when we get beyond commodities, the picture gets very different. I don’t think the authors of the IfG report see change as being purely about IT; quite clearly they do not. But the language of the report repeatedly gets trapped in making ‘IT’ and ‘not-IT’ the key organising categories:

The principles of the agile approach can be applied to almost any IT project and could also be considered for non-IT related applications. (p47)

Though that is almost immediately followed by:

Agile may have originated as a software development tool, but many of its principles can be used much more widely. Projects should be modular, iterative, responsive to change and have users at the core. (p47)

Much of this comes from the fact that, at least to start with, agile is about software development, without ever being about IT. The principles it embodies can indeed be applied much more widely, in ways which themselves have a huge impact on the nature of the project. Indeed, I would go so far as to say that the application of agile principles is inconsistent with the concept of an IT project.

So it is not possible to make change management work better in government as long as the challenge is expressed as being about fixing IT or about distinguishing IT and non-IT projects. The power – and the threat – of agile is that it undermines sequential professional silos. From what I have seen of it, it is not inherently more difficult, but it is undoubtedly very different, and like any new approach, adaptation and adoption need to be taken seriously. Particularly for large and cumbersome organisations, such as are occasionally found in the public sector, there is a real risk that agile becomes a trendy label for doing waterfall in fortnightly drops.

So by all means, let’s talk about platform, agile, commodities and customisation – but let’s not squeeze them under a heading of IT where they do not fit. Once we break free of that last critical element of the waterfall mindset, we can think seriously about how to do all this better.

That still leaves, of course, the question of how best to apply agile approaches, and indeed whether they can be made to work at scale in government at all.  That’s an important debate, the subject of a flurry of recent blog posts, but it’s too big a question to take up this late in an already long post. So that’s a topic to return to, but one where I will be arguing that agile gives us our best hope of reconciling the challenges of large scale change in public services with efficient and effective delivery.

Design Jam

I have written a couple of times about the gap I see between the brilliance of hack days, as exemplified by Rewired State, and the need to build customer needs and user experience into the mix:

These projects can get off to a great start using their originators as their own use case, but they won’t become sustainable on that basis. Government has painfully learned – or, rather, is painfully learning – that starting off with the assumption that you know what is best for people doesn’t deliver the greatest results. I am not quite sure where the tipping point comes between creator-evangelists and customer-centred design, but I am sure it has to come somewhere.

So I was delighted to spot this flowing through the twitter time line:

The concept of a design jam is a new one to me, but it sounds as though it’s a cross between a hack day and an unconference/barcamp:

Design Jams are one-day design sessions, during which people team up to solve engaging UX challenges.

While conferences and talks are very popular in the UX community, we don’t have many events for actual collaboration, like the ‘hackdays’ enjoyed by the development community. Design Jams get designers together to learn from each other while working on actual problems. The sessions champion open-source thinking and are non-profit, run by local volunteers.

Sounds like a fantastic idea, even though I am left slightly wondering how you do user experience design without involving some users. I am not remotely qualified to go myself, but would be fascinated to see the final presentations – it would be great if the organisers were to open those up to interested non-participants.

Tickets are available from 1pm on Monday.

Release early, release often

The Google Reader team are pleased with themselves, not without reason:

Today we built the 500th version of Reader; over the 5 years that we’ve worked on Reader, that works out to almost two builds a week.

I suspect that’s distinctive, if at all, only in that they are both keeping score and saying so in public.  I heard an Amazon person talk about version 300 or so quite a few years ago.  Closer to home, I remember Tom Steinberg saying the No10 petitions site was on version 3 (or was it 5?) at the end of the first day of live running.

Some of that, of course, is about how you choose to count things.  But a lot of it isn’t.
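The Reader team’s arithmetic does check out, as a quick back-of-the-envelope calculation shows (a minimal sketch; the figures come straight from their post):

```python
# Sanity check on the Google Reader team's claim:
# 500 builds over 5 years, at roughly 52 weeks a year.
builds = 500
weeks = 5 * 52
rate = builds / weeks
print(f"{rate:.2f} builds per week")  # prints 1.92 builds per week
```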

Re: Re-Rewired State

Rewired State has taken another step towards becoming the next generation systems integrator for government.  In a piece of delightful recursion, a Rewired State project becomes the vehicle for accessing the formal status of Rewired State – or as it has been since last Monday, Rewired State Ltd.
In other news, the Rewired State gang has just announced their plans for March 2010.  Last year’s National Hack the Government Day still sticks in my mind as something of a personal turning point.  It was the first time I got a sense of the energy and opportunity represented by that community, and as I summed it up in my account of the day:

The simple fact that lots of smart people thought the best thing they could do with their Saturday was to think really hard about how to make government better is a force for good we cannot afford to lose.

This year, that’s just one of four events being run under the Rewired State banner.  What’s really interesting is what appears to be an entirely non-accidental absence of any sense of groundhog day.  The world has moved on in the last year – for which the Rewired State crew can take some of the credit.  The question is no longer, can bright people do smart things with government data?  That is proven beyond challenge.  The question now is how those ideas can break through from demonstrator and prototype to robust and scalable service, and from services which are available and potentially useful to services which are used and celebrated.  So it is really interesting to see that two of the four events involve paying developers to tackle specific problems while still leaving plenty of space for the more anarchic hack day itself.

I have a longer post on some of these issues, which I had been writing over the last few days before any of this was announced, and which should see the light of day imminently.  But since I am absolutely unqualified actually to take part in Rewired State (the last time I did anything which looked at all like coding was in 1987, and it wasn’t many years before that that I learned error correction by hand punching paper tape…), it can’t be too soon to start blagging for a seat at the presentations.

Who is going to build new public services?

In a world of increasingly open government data, who is going to create the services?

Brian Hoadley has a powerful go at the answer:

Those who campaign for the release of Government data seem to fall into a few major camps:

  • Those who want more access to information because it will inform their work – e.g. the press via MP Expenses
  • Rights activists who once the data is free will move onto another cause – because that’s what they do
  • Those individuals who encircle Government who continually talk about how they could produce far better ‘Services’ than Government, at a fraction of the cost and time

Better access to data for those who monitor Government and then report on its activities will have certain benefits. We can all agree that some portion of the expenses scandal was beneficial and could lead to positive change in Government spending policy. We should also acknowledge the reality – that probably 80+ percent of the scandal was merely spectacle to earn revenue for news organisations.

I will admit that the efforts of rights activists will help groups 1 and 3 above by fighting a meticulous battle to gain access to what many term as Public data in any case.

But what about those ‘Services’?

To understand the drive behind this, we need to understand that with the Government in a precarious position due to over-extension of resources during the Recession, anything that could lead to a reduction of costs will look attractive. Take, for example, the appointment of a Digital Inclusion Champion to get the remainder of the UK population online.

Why would the Government do this?

Because long-term, the consumption of digital services, that can accommodate millions in the way a physical location cannot, will result in cost savings through the reduction of said facilities and staff to run them.

My instincts on this are very much like Brian’s:  industrial strength systems require some form of industrial strength management.  But it doesn’t at all follow that nothing has changed or will change (and I don’t imagine that Brian is suggesting otherwise).  There are several important forks in the path, which separately or cumulatively could lead to our ending up in quite a range of different places.

1.  Personal and impersonal

The focus of open data is very much on impersonal and aggregate data:  postcodes, mapping, crime statistics, school performance and a whole very long list more.  What all of that data has in common is that it can all just be handed over for people to play with and build new things.  Leaving aside the surprising ability of impersonal data to become personal in slightly unexpected ways, that can all be open and straightforward.  The issues suddenly get very different once personal data comes into the picture, because the same spirit of playful openness is simply not an option.  That’s not at all to say that personal data can only be handled inside government organisations but rather, as some of the points below start to explore, that different approaches and tools are needed for building services which deliver personal outcomes.

2. Front end and back end

Front ends can and should be simple, back ends often need to be complicated.  That doesn’t make either one inherently easier; it means that they are different kinds of problems.  Achieving elegant simplicity at the front requires hard work, but very different hard work from that required to achieve robust completion at the back end.  Brian goes on to talk about FixMyStreet, the inspired genius of which is that it doesn’t make the slightest attempt to solve the back end problem: it simply presents information to local authorities to do with what they will.  The LA then needs to diagnose what kind of problem it is and whose responsibility it is to solve it, identify and allocate resources, integrate with existing plans, schedule activity, undertake the task, record completion and, ideally, somewhere along the way consider whether the problem could have been avoided in the first place.  I can see no obvious sign that systems to support that set of activities are going to come from anywhere other than their current sources (which is not to say that they will continue to be designed and built in anything like the same way).

That’s not at all an argument for government doing everything. Even big, complex, sensitive systems don’t have to sit wholly within government.  A large majority of tax returns reach government as structured data without having touched a government front end, because a whole lot of third party providers have found it in their interest to create front ends.

3.  Inside and outside

The question of how to do all this still tends to be framed round the assumption that it is government which holds personal information, that it has an obligation to limit access to and use of that information and that issues such as joining up services and sharing the data necessary to do so are problems which need to be solved and which only government can solve. Shifting the primary data store away from government (and other service providers) altogether – the volunteered personal information model – is one way of reframing the question.  But even that doesn’t take away the government big systems problem:  the piece of information you choose to share will often (but certainly not always) itself need to be stored in order to provide the service, to smooth future service provision and to provide assurance that the right things have been done.

4.  Facebook and PRINCE2

Choices on those first three dimensions between them open up a huge range of futures, with no reason to suppose that any one of them will become the single universal model.  But unless we do something even more radical, they all still need there to be big transactional, personal systems (though they don’t necessarily require those systems to be owned or operated by government).  Brian’s answer is simple and, I suspect, right:

So who, in reality, will create those digital services? It will be same internal teams, companies and consultancies who currently work for Government.

In practical terms, they are the only ones who have the infrastructure and capital to go through ISO accreditation, PRINCE training, supply account and project directors, planners, technical architects, UCD experts, designers, developers, testers and hosting.

The argument in the past has been that those techniques are the only reliable way of delivering systems with the scale and resilience needed.  But the critical question now is not whether big complex systems are needed, but whether there is only one way of building them.  I have written before about the Facebook example, which is one of several which challenge the idea that robust large scale systems supporting high volumes of rapidly fluctuating personal data can only be managed in one way.

None of that means that we will ever stop needing fast moving and often small scale innovation.  There are far too many examples of attempts to build big things in one go where either it proved too big to succeed, or got finished only when the rest of the world had moved on to something else –  or both.  But it does make the big challenge of the next few years look more about turning ideas and prototypes into boringly robust services than about generating new ideas – which is not to say that we won’t still need new ideas, just that it’s pretty clear that they will continue to come.

That’s why I was so encouraged to see Rewired State taking a step in this direction when I wrote about it earlier this week.  They are moving in from one end of the spectrum.  Now we need to get some movement from the other end.