15 September 2008
For many government services, the rules and regulations are horribly complicated. Working out which conditions need to be applied to which people in which circumstances is hard enough. Explaining that in a way that normal people can understand can be harder still.
There have been years of sustained effort to make explanations more comprehensible by expressing them in simpler language (to say nothing of the efforts to make things genuinely simpler in the first place). There are now entire racks of leaflets adorned with the imprimatur of the Plain English Campaign. That’s no bad thing – although it is one of the reasons why some documents have become physically larger and potentially even more daunting than before – but it is at best a necessary approach, rather than a sufficient one. As the Leitch report on skills noted at the end of 2006, 5 million working-age people then lacked functional literacy and 7 million lacked functional numeracy (para 2.11).
Agonising over minor changes of wording (or even wholesale re-writing) as a way of addressing the problem seems to be largely missing the point. Taking those same words off the printed page and putting them on a web page may make them more helpful for some, but again doesn’t change the nature of the basic problem. Something more radical is called for. So let us turn to the world of particle physics.
That’s another area of human endeavour where complicated and unlikely thoughts are expressed in language which is completely precise, but just as completely incomprehensible to those who are not in the know. The launch of the Large Hadron Collider at CERN this week has brought vast interest not matched by great understanding of what it’s all for. The BBC had a great documentary which made me think that I might begin to understand this stuff. But better still, in the wonderful world of YouTube, there is a 4½ minute rap video.
I am not sure that the rap explanation of how to complete a tax return would rocket to three million viewings quite as quickly as this one has done, but the principle is the same. The importance of new media is not that it is new, but that it allows us to change some fundamental things about how service providers communicate with customers and about how governments communicate with citizens – and of course the provision of basic information is only one small part of that. We can carry on tweaking our leaflets if we must, but the paradigm shift is waiting for us.
11 September 2008
We weave our memories together on demand, filling in any empty spaces with the present, which is lying around in great abundance. In Stumbling on Happiness, Harvard psych prof Daniel Gilbert describes an experiment in which people with delicious lunches in front of them are asked to remember their breakfast: overwhelmingly, the people with good lunches have more positive memories of breakfast than those who have bad lunches. We don’t remember breakfast — we look at lunch and superimpose it on breakfast.
We make the future in the same way: we extrapolate as much as we can, and whenever we run out of imagination, we just shovel the present into the holes. That’s why our pictures of the future always seem to resemble the present, only more so.
Cory Doctorow, Content: Selected Essays on Technology, Creativity, Copyright, and the Future of the Future, p40. My emphasis.
9 September 2008
There is a famous New Yorker cartoon – two dogs sitting in front of a computer with the caption, “On the Internet, nobody knows you’re a dog”. I would save you the trouble of clicking on the link to see it, except that the New Yorker would like me to pay them $360 a year for the right to reproduce it on this small, non-commercial blog – which leads to the thought that their business managers are less web-savvy than their cartoonists. The contrast is stronger still, given the fact that the cartoon dates from 1993, when the ubiquity of the internet was rather less pronounced than it is now.
The chances are that you will have seen it before – it’s apparently the most widely reproduced New Yorker cartoon, presumably because it so neatly encapsulates the point that anonymity and the absence of verification are so central to the internet – and so central to its success as well as to its frustrations. If knowing whether you were talking to a dog or not was really important to you, designing the structure of the internet the way it is wouldn’t be the obvious way of setting about it. As David Weinberger reminds us in blogging a recent presentation by James Boyle, that openness and lack of rigid structures has a lot to do with why there is a successful internet to be worrying about in the first place.
But that leaves us with a problem. Sometimes it is really important to know not just whether you are a dog or a cat, but precisely which dog or cat – in a situation when just relying on the little disc attached to your collar isn’t an option. Making connections secure can be difficult in practice, but it isn’t hard in principle. I can access my office systems remotely with two passwords (one of them very long), a smartcard, a fully encrypted laptop and a fully encrypted VPN connection. There are lots of ways of fouling up the implementation of all that, but for all practical purposes it provides effective security.
The real difficulties start when we want to be secure and open. I want to be able to show information about you to you – and only to you, and I want to accept and act on information about you from you and only from you (or actually from you or anyone you want to act on your behalf, but we’ll leave that further twist to one side). But I also want the experience to be easy for users, for the simple reason that if it isn’t, they won’t.
In the beginning, web sites did this with user names and passwords. But left to their own devices, people choose very weak passwords, and if forced to have strong passwords, they forget them. So then there came shared secrets – in effect, a backup password to be used if you forgot your real one. The problem with that – as Bruce Schneier pointed out a while ago (and then pointed out all over again more recently) – is that it inadvertently undermines the security of the system, particularly if the so-called ‘secret’ isn’t very. It’s much easier for the bad guys to attack the shared secret than it is to attack the original secure password.
The third phase was when banks, in particular, started to use the shared secret as a second password in its own right, rather than as a backup to the first. That’s a real improvement since, assuming that knowledge of the two passwords is genuinely independent, two layers of defence are inherently stronger than one. More recently, banks have moved towards a second authentication factor, with keypads and smartcards used to generate one-time codes.
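Those one-time codes are typically variants of the open HOTP and TOTP standards (RFC 4226 and RFC 6238) – whether any given bank’s keypad uses exactly these algorithms is an assumption on my part, but a minimal Python sketch of the idea looks like this:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # Pack the counter as an 8-byte big-endian message and HMAC-SHA1 it
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over the current time step."""
    return hotp(secret, int(time.time()) // period, digits)
```

The point of the second factor is visible in the code: the secret never leaves the device, only the short-lived truncated code does, so intercepting one code tells an attacker nothing useful about the next.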
Another apparent way of strengthening shared secrets is to have several, including some which are subjective and so less knowable from other sources. That’s important, and becoming ever more so: as Eric Norlin’s self-styled maxim states, ‘The internet inexorably pulls information from the private domain into the public domain.’ But even that may break as a system quite quickly: there is nothing online anywhere (as far as I know) which links my name with any of the schools I went to. The same may well not be true for the digital natives, as Norlin argues in support of his maxim:
In the context of choice being the identity default, we’re finding that the bulk of online users are choosing to place huge chunks of their identity online. My evidence: MySpace, YouTube, Facebook, etc. The heaviest generational component of the online community (the kids) rushes to identity themselves online. They flock to it so fast and so easily that its making federal lawmakers (and many parents) uneasy. Do these kids think that anonymity is or should be the online default?
The amount of personal data swilling around is now sufficient for a whole new industry – apparently calling itself ‘knowledge based authentication’:
The key is for a business to use a KBA system that bases its questions on non-credit data and reaches back into your public records history so that the answers are not easily guessed or blatantly obvious. Typically, consumers find credit based questions (what was the amount of your last mortgage payment, bank deposit, etc) intrusive and difficult to answer, and these type of answers can be forged by stealing someone’s credit report or accessed with compromised consumer data. Without giving away too much of our secret sauce, our questions relate to items such as former addresses (from as far back as college), people you know, vehicle information and anything else that can be determined confidentally while not exposing data from existing public data sources.
As Kim Cameron notes in quoting that material:
Why wouldn’t an organized crime syndicate be able to set itself up with exactly the same set of publicly available databases used by IDology and thus be able to impersonate all of us perfectly – since it would know all the answers to the same questions? It seems feasible to me. I think it is likely that this technology, if initially successful and widely deployed, will crumble under attack because of that very success.
My second concern regards the security of the system from the point of view of the individual; in other words, her privacy. IDology’s approach takes progressively more obscure aspects of a person’s history and then, through the question and answer process, shares them with sites that people shouldn’t necessarily trust very much.
The last point unravels things one step further: how secret can a secret be once it has been shared? Worse still, what happens when it is known to have been compromised? As Navjeet Khosa observes, once the ‘true’ answers have been compromised, they are useless:
The situation was made worse when, after finding a virus on a machine I used, I had to call the bank and change the answers to all of the security questions. Now, when asked about the last school I went to, my favourite colour etc, I can never remember the answer required, because I had to change the correct one. The addition of more security – such as a card reader – is likely to make the whole process even more troublesome and take even longer.
She also cites interesting research suggesting that beyond a certain point, adding layers of security process may undermine, rather than reinforce trust:
Researchers Hokyoung Ryu and Kansi Zhang found that although enhanced security measures for Web banking may make the process “technically safer”, the more identity-checking steps that are required of a customer, the less “trusting” they feel.
That clearly applies in other contexts too: too much security can feel odd, as Richard Clayton observes about the hoops he has to jump through just to provide a biannual return to the Pensions Regulator.
There is also a social dimension to shared secrets. They tend to emphasise early experience, presumably because current data is more at risk of being known to current acquaintances (though as noted above, that assumption may be becoming increasingly questionable), and there is an implied assumption that that early experience was singular, stable and memorable. By the time I was 11, I had been to seven schools and lived in five places. Asking me the name of my primary school or the street where I grew up doesn’t work very well. I suspect (though without evidence) that many of the questions which tend to be used carry quite a bit of cultural baggage which is unseen by those who pose them.
So what do we do?
The simple answer is that I don’t know. This post has got long and meandering because it is mainly a device for trying to get my own thinking straight, and it’s not altogether succeeding. One possible route is simply to cut through all these issues by adopting a different approach altogether. Applying two-factor authentication doesn’t have to be as complicated as the banks have made it. There is, for example, the YubiKey which is operated just by putting it into a USB port and pressing a single button, which is on the face of it a considerably more straightforward solution – at least, once the challenge of physical distribution has been cracked. More generally, there is always the possibility of more elegant solutions based on approaches such as the laws of identity (just out in a new condensed version – though I find the longer paper more helpful, because the questions here matter at least as much as the answers).
I’ll leave the last word – at least for the time being – with Ramon Rozas, who has written a short story called Security Question. The prose is a bit clunky, but it’s very short – and worth hanging on for the punchline.
28 August 2008
And still more on paying attention – or in this case, tools to support not paying attention. There seems to be a flurry of ways of shielding people from some of the more distracting effects of email. So here is news of a programme called Freedom which shuts down all network connections from a Mac for a user-defined period of up to six hours – and once set, you can’t turn it off except by rebooting: ‘the point is to make it difficult for your internet-addicted self to override your sensible “must focus” self’.
Hot on its heels comes Hit Me Later:
Just forward any email to email@example.com and we’ll resend it to you 24 hours later. You can replace “24” with any number or day. For example, forward it to firstname.lastname@example.org and you’ll get it back four hours later. Send it to email@example.com and we’ll send it back to you the first Wednesday morning after today.
Even Microsoft is getting in on the act with a prototype email prioritizer which not only allows users to set rules about which emails are to be treated as important, but also includes a do not disturb button which stops the delivery of new mail.
None of those does anything to stop the email coming your way in the first place, of course, and in one sense they achieve nothing: nobody is forcing anybody to be distracted by incoming email. But the ability of humans to be distracted is a fact of life, and the ability of email to be distracting is pretty unarguable. Much better, of course, to address the underlying problem than mask the symptoms, but it’s a sign of how desperate people can get that masking the symptoms is seen as worth doing.
That just leaves the question of what we might be able to do if all the distractions could be pushed away. Seth Godin has a suggestion for the quiet time at the end of August:
Do nothing except finish the project. Hey, you could have been on vacation, so it’s okay to neglect everything else, to put your email on vacation autorespond and your phone on voice mail and to beg off on the sleepy weekly all-hands meeting and to avoid the interactions with those that might say no…
And then finish it.
12 August 2008
In the spirit of paying attention to what I am paying attention to, I can’t help noticing that emails are still feeling oppressive.
To all employees:
Beginning August 1st, you will no longer be able to send an e-mail to another employee of our organization. After some study, we have concluded that such e-mails are almost never the most efficient or effective way to obtain, provide or exchange information. In fact, we estimate that as much as 20% of our employees’ time is wasted reading, writing and answering e-mails, beyond the time that it would take to communicate the same information using more appropriate means.
Instead of email, the hapless troops are enjoined to use:
- Instant Messaging
- Desktop Video & Screen-Sharing
- Instant Survey
Each in its proper place, each being used for what it can do best. But all of those come after the still simpler and more powerful idea that it is a central part of every employee’s obligations to make themselves available for helpful conversations with everybody else.
Read the whole thing, then reflect on the creative energy to be unleashed by implementing it. The self assessment is then less about the extent to which email is used inappropriately, much more about what other possibilities the organisation affords, both technically and culturally. As with any other change, there being another way which is clearly better is a prerequisite, but having the tools won’t create the change. At least that’s what I would assume. For the moment, I would settle for having some of the tools, so that sending an email doesn’t always have to be the answer, regardless of the question.
11 August 2008
Twenty years ago, a business case was required for the purchase of a single PC. Ten years ago, internet access was through a modem attached to a computer in a small locked room at the end of the corridor, with a book kept beside it to record sites visited. The past gets strange and distant very quickly.
At some point in that process, there was a reversal of expectations and experience. Technology and its applications in the work environment got overtaken by technology and its applications at home. It happened at different times for different people and different working environments, but it must now be true that a high proportion of office employees can do more with their own technology than with their employers’.
Increasingly, though, the disparity is socially driven, not technically driven. The constraints are less about what the technology can inherently do and more about where the user is allowed to go. As Techdirt recently reported:
Consulting firm Challenger, Gray & Christmas has come out with a study this week claiming that nearly one in four companies blocks employee access to social networks like Facebook and MySpace … [T]here are companies who probably ban Facebook at work — just like in the early days of the telephone there were those that banned telephones at work, and, more recently there were companies that banned email or the internet at work. Eventually, companies recognize that fearing communication tools tends to backfire. Embracing them tends to be a lot more productive.
So how do new tools and applications permeate the boundaries which organisations erect around themselves? Ars Technica has explanations based on culture, economics and the decentralisation of technology. They conclude on the first of those:
The end result is that consumers bring to the office the expectations that they’ve developed through their interaction with consumer hardware, and in most cases those expectations are frustrated by the reality of corporate IT. This phenomenon is also at work on the network, where users develop their sense of how networked apps (messaging, collaboration, and archival) should look and function through daily contact with the lively ecosystem of consumer-driven Web 2.0 applications. Next to something like Facebook or Google Maps, most corporate intranets have an almost Soviet-like air of decrepit futility, like they’re someone’s lame attempt to imitate for a captive audience what’s available on the open market.
But the Ars analysis makes it all seem fairly logical, though my experience suggests that that is often far from the case. Ostensible reasons for constraints collide with actual reasons to create a pile-up which is hard to disentangle, and so hard to move on from. Part of that is about changed expectations about what is normal – email can be uncomfortable and slightly threatening, a core working tool, or frustratingly old-fashioned, depending on the perception of the user, rather than on anything about the technology. Those perceptions of users then run up against perceptions of the organisation: which uses of the system are safe, which are insecure, which are frivolous, which will at some point fall into (or are already in) the category of things which new entrants to the organisation will simply take for granted. Getting that right entails separating out two elements of trust which are often run together – trust to act sensibly and trust to act responsibly.
The first is about how people manage their time and work: if we unblock Facebook, does organisational productivity plummet? There is a simple answer, favoured by some, that anybody determined to be unproductive will find ways of being so regardless of whether they have access to social media or not. I don’t think that’s compelling either way in itself. The more pertinent point from an organisational point of view is whether communication literacy is complete without them, and even if it is now, for how much longer. That is – or should be – about how communication happens within the organisation and with its customers. The real need is to find a way of trusting people to behave appropriately in an environment with which many of the people who would have to do the trusting are deeply unfamiliar.
The second is about personal hygiene: if we unblock Facebook, will users do stupid things which put data and systems at risk? That seems to me both easier and harder to deal with. The easy bit should be to be clear about the difference between real risks and the appearance of risk, and not to default to locking down anything which looks even vaguely like the second. The hard bits come from the twin facts that there are bad guys out there and that the manipulation of social norms is one of their key techniques. We know that there is a market for the criminal misuse of our customers’ data; we have not the slightest desire to add to its supply. Brendan Peat, one of Don Tapscott’s team at Wikinomics, argues strongly for relying on trust to move things forward:
My feeling is that in moving beyond the current model for information security is going to take a little bit of technology and a lot of trust. Web 2.0 tools and the Net Generation will both be additional factors that push the issue to the forefront at leading organizations. Companies will need to move to a model of ‘decentralized security’, which I see as basically allowing users to manage their own security permissions. Organizations will first start experimenting with information inside the firewall, but eventually they will need to evolve and extend beyond the walls of the enterprise.
The problem is that organizations today need to be agile, reconfigurable, be able to leverage partners and third party expertise. Unfortunately to operate in this new environment security and permissions need to be dynamic and flexible both internally and externally. To become a next generation enterprise it will be increasingly important to both empower and trust employees when it comes to information and security decisions.
That’s all very well in principle. And I can see how it might work in organisations which are both (a) relatively small and highly cohesive (so that there’s plenty of offline trust to start with) and (b) hold information which is not, in general, specifically attractive to criminal outsiders. We seem a long way, though, from a state where we could be confident that individual users would make the right trade offs (it sometimes feels as though we are still a long way from the people whose job this is making the right trade offs). Saying that we need to empower and trust individuals is fine in principle, but it serves only to take us on to the next question: empowered and trusted to do what with whom in what circumstances. There can be principles for that, but the attempts I have seen fall into a common trap with guidance: they look helpful when you don’t have a problem and don’t need guidance, but fail miserably just at the point where you are sufficiently uncertain to turn to the guidance. So we come back again to trust: what are the steps we need to take which will get us to enough people being comfortable that greater openness based on a wider range of tools is the right way forward?
8 August 2008
Two weeks away, one week back at work. What have I achieved?
In two weeks away, some reflection, some reading, some thinking – or doing a part of my job which I enjoy (enough to relax by doing it), which I think is important, and which I do far too little of while I am actually at work. In one week back, I have more or less wrestled my pile of emails into submission, had some useful conversations and had the very rewarding sensation of my team getting on with doing great things without me – but have not done any of a list of more substantial tasks which are less urgent but perhaps more important.
As ever, Seth Godin is pithy and pertinent:
When you’re done with your email queue, are you done?
Do you spend your day responding and reacting to incoming all day… until the list is empty? … and then you’re done.
I’m noticing that it’s easier than ever to have that sort of day. Online tools are arranging interactions in a line, allowing you to feel satisfied with a constant stream of incoming alerts and pings.
Years ago, I got my mail (the old fashioned kind) once a day. It took twenty minutes to process and I was forced to spend the rest of the day initiating, reaching out, inventing and designing. Today, it’s easy to spend the whole day hitting ‘reply’.
Carving out time to initiate is more important than ever.
In a slightly different context, Scott Rosenberg recalls advice he attributes to Howard Rheingold – to “Pay attention to what you’re paying attention to.” Scott’s interest is in how he is spending his ‘media time’, but I think the advice is better and broader than that. Being alert about what is influencing your thinking and perception is always useful. For those of us working in large organisations, that’s particularly important, not least as a way of reducing the risk of group think, of doing things this way because this is how we do things. It’s also important as a way of measuring where we are in the constant tension between urgency (or at least expressed urgency) and importance – and as with so many things, measurement is necessary, though far from sufficient, if there is to be control.
One technique I have found useful is to think in terms of time budgeting. How much time am I prepared to spend working? Within that, what’s the most important thing I need to do, and how much time should I commit to doing it? Iterate until the time is accounted for. Of course, in the real world that needs to take account of other people’s needs and preferences – but it also leads pretty forcibly to the conclusion that responding to every clamour for attention from emails and meetings is a rapid route to perdition. The analysis is one thing. Putting it into practice is quite another. But this is a useful reminder to have another go.
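That iteration can be sketched as a simple greedy allocation – the task names and hours below are purely hypothetical, for illustration:

```python
def budget_time(total_hours, tasks):
    """Allocate a fixed time budget to tasks, most important first.

    `tasks` is a list of (name, hours_wanted) pairs, already ordered by
    importance; allocation stops when the budget is exhausted.
    """
    plan = []
    remaining = total_hours
    for name, wanted in tasks:
        if remaining <= 0:
            break
        # Give each task what it asks for, capped by what is left
        allocated = min(wanted, remaining)
        plan.append((name, allocated))
        remaining -= allocated
    return plan

# A hypothetical day: the urgent-but-less-important items get whatever is left
plan = budget_time(8, [("project", 5), ("email", 4), ("meeting", 2)])
```

The point of the exercise is in what falls off the end of the list: anything that doesn’t fit the budget simply doesn’t happen, rather than silently eating the important work.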
16 July 2008
danah boyd has been using the launch of the new iPhone as a prompt to reflect on cluster effects – ‘the cool things that people do when all of their friends can do the same things’. The crucial thing about that definition is that it is not about mass adoption as such, it is about reaching the critical mass of adoption within a particular social group.
Right now, a phone is primarily a 1-1 communication device and, if you’re lucky, an information access device and a portal to the web. Interesting things can happen when the mobile is a platform itself. In other words, when you can assume that everyone around you has the same tool, you can start doing networked activities that don’t rely on a website. Cluster effects in mobile will be what happens when the LCD is not texting. From there, you can innovate. Sure, we’re going to see a plethora of mobile social network sites and mobile location friend services and mobile dating and mobile media sharing communities. The first wave will always be a translation of the web. But once you have cluster effects, you can also start innovating and finding new services and tools that allow people to connect in meaningful way. New games can emerge. New social services. Innovation in this space will be iterative – it will involve throwing things out to the market and seeing what consumers do and do not do. It will require iterating based on their practices and not trying to shove those curvy creatures into square holes. But there’s no point in leaving the starting block until cluster effects are underway because, sadly, iterating in imagination land inevitably leads to techno-utopian fantasies instead of meaningful applications.
Of course she is talking about a particular demographic: iPhones are not much in evidence among the customers of my organisation. Texting remains the HCF as well as the LCD – and for many, even that is exotic and unfamiliar. But despite that, putting it together with three other pointers drives out some powerful conclusions:
The first is that the biggest impact of the iPhone will come after the smart people have moved on to the next thing – following Yogi Berra’s famous principle that ‘Nobody goes there anymore, it’s too crowded’. As I noted a while back, the iPhone is a symbol of the future, even if that future is still several years away for many, and won’t be called the iPhone when they get there.
The second is that the increasing elegance and power of the gadgets is reinforced by the growing power of mobility. As Vint Cerf is cited as saying early last year:
The future growth of the Internet lies in the hands of mobile phone users, not computers
He was primarily drawing a first world/third world contrast, but the point is just as forceful looking at gaps and patterns within societies. The era of the landline is beginning to slip, displaced by the social and economic context of mobile phones (no contract, no risk of debt, unaffected by changes of address) as well as by their utility.
The third is danah’s last point. We won’t know what all that will be good for until it becomes clearer what it is good for. The waiting doesn’t have to be passive, though. We can attempt to understand what people might want to do independently of our current ability to support their doing it – which is fine up to a point, so long as we understand that that point is quite limited. That’s partly the result of the general chaos effects which afflict weather forecasts, but it is also more specifically a result of our not yet having even thought about how the way the world works is affected by the universal and ubiquitous use of powerful but pocket sized network devices.
Not everyone has access to the internet. But we are well on the way to pretty much everyone having a mobile phone. And in parallel with that, what those phones can be used for is growing almost as fast.
A while ago, I predicted that within three to five years, the sophisticated communication features now found in phones at the top of the market would have percolated through to being universal. I still expect to see that within one to two upgrade cycles. But perhaps we won’t have to wait that long. Two new services from Amazon (so far in the US only, as far as I can see) show how much mileage there may still be in the humble text message. The first is the ability to buy things from Amazon. While it’s extraordinary to think of the entire Amazon catalogue reduced to 160-character screens, this has something of the feel of a dancing chihuahua – impressive that it can be done, but very hard to see the use of it.
The second service is potentially more interesting – using a mobile to send and receive money, either to another mobile or to an e-mail address, so that you can ‘pay your friends, split a restaurant bill or pay your babysitter’, as the web page helpfully suggests. The idea of using phones to store money which can be used to make small payments is not new – though evidence that doing that has seriously taken off anywhere is harder to come by. But the earlier ideas seemed more focused on using the phone as a smart card with proximity devices rather than having much to do with phones being phones. Amazon, by contrast, is making use of the pretty basic fact that phones are communication devices.
That ought to have significant potential for public services, just because there is an important part of the population, many of them substantial users of government services, for whom this is their dominant modern communication tool. There are some early examples around (and more on the excitements of car parking in a post still to come), but it would be good to find ways of making better use of the ubiquity of the mobile.
8 July 2008
It’s almost exactly a year since I last felt the urge to write anything about presentations and bad powerpoint. That’s not because the world has suddenly become a better place, but because there doesn’t seem much more to say. But an experience today has reminded me just how critically important this stuff is.
I arrived a bit late for an event hosted by a big technology company, so ended up sitting right at the back of the rather grand room. There was a big screen at the front, behind the speakers, and two smaller screens halfway down, one on each side of the room. That would have been fine if all the speakers had kept, say, to Guy Kawasaki’s rules:
It’s quite simple: a PowerPoint presentation should have ten slides, last no more than twenty minutes, and contain no font smaller than thirty points…
If “thirty points” is too dogmatic, then I offer you an algorithm: find out the age of the oldest person in your audience and divide it by two. That’s your optimal font size.
But they didn’t. One of the speakers in particular used slides with quite a lot of words. That much I could see. But I couldn’t actually read a single word on any of them. Even that might have been just about bearable – until his final slide, which consisted of a quotation from someone or other (I couldn’t read that either). His grand peroration consisted of introducing the quotation – and then leaving us all to read it for ourselves. Or, of course, not.
Not all of that was the speaker’s fault. The screens wouldn’t have been brilliant even for better slides. And maybe nobody had told him about the size and layout of the room. And on top of all that, the acoustics didn’t favour his voice, and the sound system wasn’t good enough to compensate.
None of that would have been nearly as much of a problem if his slides had been simpler, more dramatic – if they had, quite literally shown the bigger picture. Or, of course, if somebody had paused to think about how the slides would work in the environment in which they were going to be used.
Rules are definitely made to be broken. I admire the bravura performers who can fit dozens of slides into quite short presentations – this one by Dick Hardt remains one of my favourites (bear with it until you get past the guy in the shirt), which is modelled, as he acknowledges, on a style awesomely deployed by Lawrence Lessig – so I certainly don’t agree with any of the x slides in y minutes rules. But as with anything else, mastering the technique is an essential precursor to breaking them.