Friday, September 16, 2011

IT's added value

Amidst the bearish sentiment and imminent collapse of the Greek economy, it's a good moment to reflect on the business case of the IT department. The good news: the IT department is definitely necessary. The obvious news: it has to demonstrate more added value to its company. The bad news: IT guys are not good at demonstrating value, precisely because they're necessary. Performing at SLA levels is not added value; that's the agreed minimum. Yet soon enough some nice finance guys come knocking with cutback plans, and a requirement to show one's worth.

The IT department does have a strong process orientation in common with its financial department brothers, and a penchant for metrics and service level agreements. However, this will not be enough to prevent the finance guys from pulling a Cain on many IT department jobs when it's crunch time.
Factor in aggressive competition between IT service providers, outsourcing and offshoring alternatives, and cloud computing, and before you know it the entire department is Abel. Doom scenario? Of course.

So, plan V for value. The hidden power of an IT department is continuous service improvement. One clear way to demonstrate value is to provide clear metrics on the improvement of IT operations over time. If the department can show it delivers better warranty and utility to the business than last year, preferably saving dollars at the same time, it's a good show. I'll briefly discuss seven areas to measure and report on. For all of these areas, the real power is showing the trends over time.

People, IT operators
People are the core asset of the department, not computers, systems or processes. Measure their performance and show it front and center. Not only is it the most important data, it's also the easiest to relate to for the alpha-educated. People-related KPIs show not just how well employees perform; they also display the vital statistics of the department itself, its health and wellbeing. Overtime and sick days are the writing on the wall to experienced executives.

Users
Our dear problems between keyboard and chair are of course second in line to be measured and reported on. User-related stats and KPIs show the value of the department to the business like nothing else: basically, they show how much work the department makes possible elsewhere. Show how many users a single operator supports. Show how fast a new user is up and running with a new computer, phone, account, software etc. Show the department as an enabler.
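To make that concrete, here is a minimal Python sketch of those two user metrics; the headcounts, dates and field layout are all made up for illustration.

    # Hypothetical figures: users served per operator, and how fast a new
    # user is up and running. All sample data is invented.
    from datetime import date

    users, operators = 1450, 12
    onboardings = [  # (request date, first productive day)
        (date(2011, 8, 1), date(2011, 8, 2)),
        (date(2011, 8, 15), date(2011, 8, 19)),
        (date(2011, 9, 5), date(2011, 9, 6)),
    ]

    print(f"Users per operator: {users / operators:.0f}")
    avg = sum((up - asked).days for asked, up in onboardings) / len(onboardings)
    print(f"Average days until a new user is up and running: {avg:.1f}")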

Partners and suppliers
If you have mastered the exact art and subtle science of multi-vendor contracts, devote time to playing them off against each other. The opportunity is to create competition for quality; the pitfall is to allow competition for price. Therefore measure and report on your partners and suppliers the same way you do on your own department: focus on the people and the quality of work delivered, focus on the value to the business. Make no bones about who gets more pie when it's time to re-negotiate those contracts.

Incidents
Incident metrics are real measures of quality for an IT department. In an operational report at the department level, this data would perhaps be second in line. When it comes to reporting added value it sits a bit lower in the list, but is no less important. The business wants to know its uptime, its operational risks and the speed and efficacy of its solution mechanisms. Incidents per timeframe or average resolution time alone are not going to cut it. Showing a breakdown of incidents resolved via the self-service desk versus those resolved by operators, along with their respective severity: now that is showing why they pay the big bucks. A short story on the emergence and resolution of a major incident does not go amiss either.
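For the terminally curious, a short Python sketch of that breakdown; the incident records and field names are invented for illustration.

    # Hypothetical incident log: (severity, resolution channel, hours to resolve).
    from collections import Counter

    incidents = [
        ("low", "self-service", 0.5),
        ("low", "operator", 2.0),
        ("medium", "operator", 6.0),
        ("high", "operator", 1.5),
        ("low", "self-service", 0.2),
    ]

    # Self-service versus operator-resolved, broken down per severity.
    breakdown = Counter((sev, channel) for sev, channel, _ in incidents)
    for (sev, channel), count in sorted(breakdown.items()):
        print(f"{sev:>6} / {channel:<12}: {count}")

    # The classic headline number, for completeness.
    avg_hours = sum(hours for _, _, hours in incidents) / len(incidents)
    print(f"Average resolution time: {avg_hours:.1f} h")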

Changes
Change management is absolutely vital, but the related data is almost unbearably boring. Standard changes / request fulfillment can be reported on by average time to fulfillment and closure. Non-standard changes are mainly interesting for how well they are executed. Therefore report the number of change-related incidents; if it's a tight ship, the trend should slant to the lower right. A well-done report harks back to the section on users: show the department as a business enabler by providing insight into the business changes enabled by IT change management.
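Whether that trend really slants to the lower right is a few lines of arithmetic; a minimal sketch with made-up monthly counts, standard library only.

    # Fit a straight line through monthly change-related incident counts.
    # The numbers are hypothetical; a negative slope means a tightening ship.
    months = list(range(1, 13))
    counts = [14, 12, 13, 10, 11, 9, 8, 9, 7, 6, 7, 5]

    n = len(months)
    mx, my = sum(months) / n, sum(counts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(months, counts))
             / sum((x - mx) ** 2 for x in months))
    print(f"Trend: {slope:+.2f} change-related incidents per month")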

Configurations
Now, and only now, do we get around to the original raison d'ĂȘtre of this whole ITSM bit: the computers and other configurations. Lots of things to measure, lots of ways to demonstrate value. Show the reliability of the standard configurations and their total cost of ownership. Show how much discount was brokered in volume agreements with large manufacturers. Keep in mind that this stuff is really boring: internally, maintain comprehensive data on the fleet, the stock and the various costs; externally, report the basics. Configuration-related metrics are important for driving operational excellence. Stock-keeping and supply chain management are arcane sciences of their own, which IT guys may underestimate. The larger your operation, the bigger the issue. Smart contracts can keep the onus largely on the supplier side.

Miscellaneous metrics
There are many more things to report on, according to preference. Some nice forms of report padding are storage size / cost versus utilization and my personal favorite: e-mail. Mail volume, related incidents and the cost of the mail solution per user and per e-mail are all eminently relatable for decision makers. Odds are the report was received by e-mail and will be acted on with e-mails, so it's an appropriate and ironic salute to the IT-dependency of the company.
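The e-mail numbers are simple division; here's a hypothetical back-of-the-envelope, with every figure assumed.

    # All figures invented for illustration.
    yearly_mail_cost = 120_000.00   # total cost of the mail solution
    users = 1450
    mails_per_year = 4_200_000

    print(f"Mail cost per user:   {yearly_mail_cost / users:.2f}")
    print(f"Mail cost per e-mail: {yearly_mail_cost / mails_per_year:.4f}")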

PR Metrics
Last and least are what I call PR metrics, such as customer satisfaction. I am gleefully swearing in church by saying this. If all the above measures are signs of a well-functioning IT department, then this happy-user stuff should be completely superfluous. Conversely, if there is trouble in any of the above areas, customer satisfaction will be adversely affected.
Measuring customer satisfaction as your primary KPI is like measuring how close you are to port, while neglecting to measure your course, speed and prevailing weather. Sure, it tells you how well you are doing, but does not tell you a blessed thing about how to get better. Therefore measure and report all the rest first.


The bottom line
Never forget who holds the wallet and show your results to that guy. Everyone else is just filling a chair. It's not important how well you or anyone else thinks you do, it's important how well he thinks you do.
Secondly, don't think for a minute that this is too much paperwork. On good days, this paperwork enables operational excellence. In the bad days ahead, it's survival.
The bottom line of any executive decision is cost versus (expected) profit. Know your cost, show your profit.

Monday, September 12, 2011

The call of the cyber sirens

The promise of new technology can be a real siren's call to an IT department. New windows versions, new storage systems, a better process implementation can seem so incredibly good that the business case seems to write itself. Then comes the implementation project, and reality serves cold coffee for breakfast.

Jim Collins (Good to Great) once likened smooth operations and steady improvements in great organizations to a flywheel. It turns steadily, and with steady increments it can go faster and faster. Great results come from evolution.

Similarly, the most important function of an IT department is to support the central process of the company by ensuring the utility and warranty of its IT assets. In short, keep it ticking over nicely. Improvements are always welcomed, revolutions are not.

The cloud, (internal) social networks and Bring Your Own Device (BYOD) support are all great phenomena with a lot of promise, but at the same time they are silly fads that have yet to prove they can add to your company's bottom line. Yes, they add billions to Apple and Facebook, but odds are your company has a different focus altogether. Readiness for 2015 does not mean equipping all employees with telepathy chips over the next six months.

Once in a while the sales pitch is just too good, and an IT department saddles up, corrals some consultants, and goes to work delivering tomorrow, today. Research says at least 30% of IT projects fail, costing businesses green stuff without ever delivering on their promise. Some estimate as many as 68% of IT projects are doomed. Most participants are even aware of this, never expecting a hyped-up IT project to succeed at all. What happened there? Lack of ambition? Steady improvements? Nope. Finance forgot to tie Odysseus down with a nice tight budget.

On the other hand, there are a lot of businesses doing amazing things with technology, step by step bypassing their impulsive competitors to reach a level of sophistication beyond any of them. The aforementioned book contains examples of companies progressing steadily from old-school, non-technical outfits to operating their own custom satellite systems, one small step at a time, never trying to go from 0.1 to 2.0 in one giant leap.

In short, IT departments might do well to instill a motto, "we live to serve" or something humble like that, and make sure that they run the smoothest IT operation rather than the most cutting edge. DevOps, anyone?

Friday, September 9, 2011

Creeping elegance

Programmers tend to be perfectionists, and are familiar with a phenomenon known as creeping elegance. While writing code, it becomes an obsession to write it as tersely and elegantly as possible, which comes at the expense of time, readability and focus on the final product.

Projects can also experience creeping elegance, in this form an insidious variety of scope creep. Say there's a process optimization going on in a very basic environment. Little documentation, lots of informal procedures, opaque and implicit knowhow ingrained in the employees, organic structure, the works. The project brief states that the goal is to document knowledge and procedures, design and roll out a standardized process, and train employees in the new and formal way of working. The allotted time is nine months, five internal people are put on the project for 20% of their time, and three consultants are hired.

A year later, a cold cup of coffee overhears a debate about indentation in the new documentation template, mixed with comments on the frequency of detailed status reports. The cup quietly reflects that, just like twelve months ago, neither the template nor the report is used operationally, but once the project is done they will surely be things of awesome perfection, terrible to behold and a slap in the face of any IT department that gets by with a lesser standard.

What happened? Creeping elegance. It's very hard to decide when to stop once you're improving something. At what point has enough knowledge been documented? When is a process good enough? That's undefined. It's not like a building plan where you're done at some point, when the building stands tall and happy tenants are preparing dinner inside.

Operationally, this is easy. Continual Service Improvement (CSI) is the name of the game. Steady progress. But when improvement becomes a project, it's very hard to keep the creeping elegance genie in the bottle. The only solution is strict timelines, and a project manager with enough common sense to nip creeping elegance in the bud. When the sand runs out in the hourglass, whatever is agreed on is delivered as the final product, and operational CSI takes over. Mission accomplished.

Thursday, September 8, 2011

Oops

While updating labels on all posts I accidentally re-posted some old material as 'new' posts. My apologies. The latest post is called The keys to the Kingdom.

Cortlandt, and why we don't use free software all that much

1. Coding is fun
2. EULAs are a farce; people would never buy anything else under the terms & conditions attached to software
3. It's great to have your name on something good, something you made or contributed to
Ergo: create free software. Any questions?

Well, I do have a couple.
I like getting paid for my work. Lean software development is cool, getting leaner developing software is not. How to monetize?

Someone who can do something I can't, or can genuinely do it way better, is welcome to add to my code. Everyone else should bugger off. How to maintain quality & ownership of an idea / some software?
Nothing is ever truly perfect. Who is going to take care of support issues once I finish uploading this baby?
In Rand's The Fountainhead, the story's famously egotistical protagonist agrees to design a housing project for free; his price is to see it erected exactly as he designed it. Subsequently, some less-than-stellar architects proceed to do a mashup job on his blueprints by adding their own 'enhancements', and the result is not at all what he intended.

What I understand is that creation can be its own reward, and when you're busy doing something great money becomes merely a means to secure the resources you need to keep creating, rather than the reward for the work you do.

What I also understand is that sometimes you've got to have it your way or not at all. Especially when any alteration would detract from the whole.

In this vein of thought the GPL seems an evil thing that lets other people appropriate the fruits of your labor, mess around with them and go on the internet going all 'look what I made'. Terrible.
However, this is not the reality of free software. The reality is that great people do great things which are then made even better by other people. Why?

Because the free market of free software leads to incredible competition and a very, very good insurance against quackery. People can't fork, edit, relabel and sell software they didn't make, because they will be called on their bullshit instantly. Also, anyone who screws up your good code will not be able to distribute it as widely, because customers will favor your better product.

Now the major weakness here is technological literacy. If I'm a car mechanic, like a good friend of mine, I'll buy a highly customizable car that I can trick out how I like, and that will outperform cars many times its cost. However, if I'm an average Joe I'll buy whatever requires the least maintenance, or the product with highly available support. The same goes for software.

Once in a while I'll try a new Linux distro, feel all warm and Tuxy, and nostalgically use all seven bash commands I know just to recapture the cool. However, as soon as I run into a problem that requires me to debug spit-and-duct-tape solutions for playing video files or scare up obscure drivers from exotic repositories, I tend to grab that OS X disk, real practical like, and restore my Mac to its rightful smooth usability. So even though I'm at least somewhat technologically literate, I tend to prefer forking over my hard-earned dough for good software, instead of free stuff that needs more attention.

Free software can and does perform flawlessly in many critical environments such as servers, but the wizards in charge of those systems are second to none in setting them up and maintaining them in such a good state. As long as your mom doesn't use free software on the house computer with the same ease as she uses any appliance, we're not where we should be in terms of usability and all our freely distributed creative efforts will see niche use at best. Granted, this is a higher standard than being merely as usable as Windows, but that was kind of the point of building something else in the first place. And given that 0900-FIX-MY-FREE-STUFF won't be answering your calls, free software cannot be the future until everyone, including mom, becomes more savvy about the stuff they (could) use.

Yes, there seem to be some counterexamples, with free browsers and such being built and working well. But their development is actually funded by multi-billion dollar corporations like AOL, Google, Microsoft and Apple. Mozilla too stays afloat on grants and cooperation agreements with the likes of Google. It is properly organized and funded development by professionals; only the product is distributed for free, the development certainly isn't. Nothing wrong with that, but it doesn't fit in a discussion of romantic basement programming by clever peeps on a creative jag.

Recap: making free software is fun and we should all do it for fulfillment and major kudos; using free software is at times not so great. Until we're all better hackers, free software is going to stay in its established niches. Shame.

Wednesday, September 7, 2011

The keys to the Kingdom


Privacy, or access restriction, is all about trust. Trust in the one guarding the access mechanism. The doors to heaven open but to the worthy, care of Saint Peter. A bank vault usually does not open until at least two different people get involved, each with their own key or code. The door to your home opens to yourself and perhaps one or two others you've given a key.

It's the same in the digital world. Computers and websites are accessed over connections, and these connections are vulnerable to trust-based attacks. Someone can easily get between you and your front door, so to speak. So-called 'man in the middle' attacks involve a game of digital charades where a bad guy, Charlie, can spy on the exchange between Bob and Alice: Charlie simply pretends to be Bob when speaking to Alice, and pretends to be Alice when speaking to Bob. Because it's all ones and zeroes, this works really well.

Now the common countermeasure is to encrypt the channel between Alice and Bob in such a way that a) Charlie has a tough time listening in and b) any attempt at impersonation is detected. This works by having a digital certificate, and some fancy non-invertible math. This approach is called public key infrastructure (PKI).

Basically, if I talk to you using the public key part of your digital certificate, only you understand what I'm saying. I would not get it myself if I heard it, post-encryption, but you do, because you can decode it with the private key part of your certificate. You can talk to me the same way, just look up my public key and encrypt your message with it. Only I will be able to reconstruct your original message. You're locking your message, and I have the only key.
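A minimal sketch of that lock-and-key exchange, here using the Python cryptography package (my choice of library and message; any decent crypto library does the same dance).

    # Bob generates a key pair: the public half goes into his certificate,
    # the private half never leaves his machine.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Alice locks her message with Bob's public key...
    ciphertext = public_key.encrypt(b"Meet me at the docks", oaep)

    # ...and only Bob's private key opens it again. Charlie sees noise.
    assert private_key.decrypt(ciphertext, oaep) == b"Meet me at the docks"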

Locks don't present much of a challenge to a locksmith. Digital locks like the PKI system I've just described have the same weakness, and this is where the trust comes in. Certificates are created by certificate authorities (CAs), and if you want to look up a public key, or verify that a sender is really the guy you think he is, you can check in with the authority. While surfing the web, your browser automatically takes care of this: the little lock symbol means that the connection is to the right entity and is secure.
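Roughly what that lock symbol boils down to, sketched with Python's standard ssl module (example.com is just a stand-in host).

    import socket
    import ssl

    ctx = ssl.create_default_context()   # loads the bundle of trusted CAs

    with socket.create_connection(("example.com", 443)) as sock:
        # wrap_socket checks that the server's certificate chains up to a
        # trusted CA and that the hostname matches; if not, it raises an
        # error instead of showing you a little lock.
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.getpeercert()["subject"])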

This puts a lot of trust in the certificate authority. A certificate is only as safe as the authority is trustworthy. An evil or stupid CA could hand out fraudulent certificates like candy, or lie to you about who you're connecting to, i.e. telling Alice she's listening to Bob when it's really our old pal Charlie, up to his tricks again.

The trustworthiness of authorities is the underlying issue in the recent discussion about certificates. Dutch CA Diginotar had the long arm of the Iranian secret service up where the sun doesn't shine, and was unaware of it. When they found out, they kept mum while figuring out what to do. Bad idea. The only antidote to a compromised certificate or CA is to blacklist it immediately and install new, clean certificates. Anyone using the old ones is likely to have a Persian Charlie sitting in on his communications, and many did.

Diginotar is not just some two-bit CA. These guys have the keys to the Kingdom. Literally. Diginotar is one of the CAs that create the certificates for the Dutch government, for major websites and services, and regrettably for a lot of Iranian dissidents too. They proved unworthy of this trust. The ramifications are huge. During the hack, a large number of certificates was created to compromise a wide array of websites and services. Browsers had to update their lists of trusted CAs, the Dutch government stopped doing business with them (albeit late), and Diginotar's reputation is tarnished forever. There's no knowing how much supposedly secure information leaked while Diginotar was silent about the hack.

One benefit is that the security awareness of at least two countries was increased quite a bit. Now let's hope the government hires a better company instead of regulating the field some more. Trust is the coin of the cybersecurity realm, and it should be spent sparingly and wisely.

-

The (devastating) FoxIT report on the hack can be found here: http://www.rijksoverheid.nl/bestanden/documenten-en-publicaties/rapporten/2011/09/05/fox-it-operation-black-tulip/rapport-fox-it-operation-black-tulip-v1-0.pdf 


N.B. At the moment the site is down, probably due to the huge demand. Mirror here: http://tweakimg.net/files/upload/Operation+Black+Tulip+v1.0.pdf



Tuesday, September 6, 2011

I want my 3PO


Whatever happened to robots? You know, humanoid metal friends, once widely expected to be all over the place by now. It didn't happen. Shame. I would have liked to have one of those. I hate ironing.

This weekend OGD is organizing Technival, a wonderful collection of geeky and fun activities wrapped in Saturday and sunshine (or so we hope). One of the many cool things to do is fight virtual robot wars with real drones, using Parrot AR drones and iPads. However, it's still us at the controls.

There are actually quite a lot of 'robots', building cars, vacuuming houses and manipulating fuel rods. Most of these are basically automatons, with about as much interactivity as a coffee machine. Not really Asimov-grade R. Daneel Olivaw material.

In the virtual world there are also a lot of bots. Contrary to their meatspace counterparts, these are highly interactive. While not quite capable of passing the Turing test, they are quite able to kill n00bs in MMORPGs, do some basic chat-based customer support on websites, and navigate virtual environments with some aplomb. Virtual bots are altogether much more sophisticated than the physical varieties. However, uploading Sansha Sleeper spawn algorithms into a Roomba vacuum cleaner will not produce an (evil) R2-D2, not quite. The complexity gap between what we can program and what we can build is too great, and somehow programming the physical to perform at the level of the virtual bots is a lot harder.

Why the missing link? Is bipedal walking really that hard? Is there an energy-density problem preventing bots from roaming freely? Or are we just unable to program any semblance of a spark into a lifeless creation?

All three are big factors in the dearth of robots on the streets today. Walking is pretty tough; batteries are expensive, heavy and weak; and however many cores we equip our computers with, they still lack originality. Topio, for example, moves well and plays table tennis, but is not much of a chess player, even though its hardware and processing capability could theoretically handle the game. It simply wasn't built for that purpose and is unable to adapt.

Thankfully there are some promising trends. There are fairly mature navigation and collision-avoidance systems for cars, and parking assist is gaining popularity. Ere long, cars will be at least partially capable of driving themselves. The aforementioned Roomba vacuuming bot is popular and faces increasing competition, with models from Samsung, Philips and others vying for your living room floor, and doing a good job of it. The Parrot drones at Technival are capable of hovering steadily, maintaining equilibrium with their gyros and rotors, and are quite good at recognizing each other mid-flight.

One day I hope to have a metal man show up at my front door to deliver himself fresh from the factory into my service. I really do hate ironing.

--

Update: Coincidentally, today's XKCD is hugely relevant: http://xkcd.com/948/

Saturday, September 3, 2011

Framework Sceptics

Once in a very great while one comes across a project or company so embroiled in paperwork that one comes to suspect its managers of a fetish for red tape. However, most projects want to succeed and most companies want to make money, so it does not occur too often, and when it does, the consequences for success and profit are easy to predict.

Amongst IT Service Management consultants it's a bit of a sport to speak disparagingly of commonly used process frameworks like ITIL and BiSL, and project consultants in general like to make light of a full-blown PRINCE2 approach to projects. It would be very amusing for a physicist to essay the shortcomings of relativity, a composer to mock music theory or an accountant to say that the GAAP standard takes itself too seriously, but they don't tend to do that. These people realize that you don't have to involve Einstein's entire work every time you conduct an experiment with time and space, that not all notes need be in a composition, and that not all rules apply to all ledgers at all times. They go quietly about the business of being good at whatever they're good at.

Not the IT consultant. You're not a man to be reckoned with until you've stated that this or that framework is far too complex, never mind that you've never worked for an organization large or complex enough to utilize it. Project leaders are even worse. Depending on whom you ask, well over half or, according to some, more than three quarters of projects end in failure. Failure: not on time, not within budget, and certainly not delivering the desired result. In my (albeit limited) experience, most projects suffer from a lack of preparation and an excess of unstructured work. I've yet to encounter the truly pointless progress report, the completely useless review meeting or the irrelevant risk register. Neither have I seen an IT department run so well that it cannot benefit from a little process optimization and a smidgen more compliance with standards.

The criticism that a framework is too complex is irrelevant as long as it is not the stated purpose of that framework to be implemented down to the last detail; scalability is at the heart of all the frameworks I mentioned. The criticism that compliance comes at the price of agility and efficiency is true, but beside the point: compliance is still worth the warranty it gives.

BP can tell you: they were not broken by over-compliance with a set of rules and regulations. One or two executives might even admit that a few more inspections and reviews would not have gone amiss. Similarly, I doubt a CIO was ever berated by his CFO for wasting money on process compliance. Rather a lot of them have had complaints about budget overruns on their projects, poor knowledge and asset management, and opaque departments, I daresay. A little bit of common wisdom, also known as a best-practice-based framework, would not hurt to make sure the same comments don't come up during the next review.

Rather than proving mastery of your trade by critiquing the work of others (a tactic used to great effect by many on a daily basis), contribute to the improvement of your tools and help make the frameworks as slim and efficient as they can be. It's not like ITSM is an academic discipline (as opposed to, say, aerospace engineering).