Monthly Archives: January 2009

One more for the deprovisioning files

I have heard several analysts say recently that the current business climate makes IdM (or at least provisioning and access control) more important than ever. It would be easy to dismiss this as wishful thinking until you read something like this:

A former Fannie Mae IT contractor has been indicted for planting a virus that would have nuked the mortgage agency’s computers, caused millions of dollars in damages and even shut down operations. How’d this happen? The contractor was terminated, but his server privileges were not.

That's not to say there aren't disgruntled employees during a normal business cycle. But the more people being laid off, the greater the risk.

How many organizations really have a handle on properly turning off access for all the people they are laying off (especially those in IT)? Of course, if IdM systems are not put in place during an up business climate, it's going to be much harder to put them in place in a down one.

Asymmetric Risk, Malpractice Insurance, and Personal Oxen

Bruce Schneier has two very interesting posts on his blog that stand out (to me at least) by their proximity to each other. Most recently Bruce has this to say about the recent financial meltdown:

The most interesting part explains how the incentives for traders encouraged them to take asymmetric risks: trade-offs that would work out well 99% of the time but fail catastrophically the remaining 1%. So of course, this is exactly what happened.

But three posts earlier Bruce has this to say about software vendors:

So if BitArmor fails and someone steals your data, and then you get ridiculed in the press, sued, and lose your customers to competitors — BitArmor will refund the purchase price.

Bottom line: PR gimmick, nothing more.

Yes, I think that software vendors need to accept liability for their products, and that we won’t see real improvements in security until then. But it has to be real liability, not this sort of token liability. And it won’t happen without the insurance companies; that’s the industry that knows how to buy and sell liability.

Talk about asymmetric risk. If software vendors accepted liability (or even partial liability) for anything that might happen as a result of their products, who in their right mind would ever go into the business? The problem is that the liability is open-ended while the profit on each deal is not. It would be nuts for any vendor to take such an asymmetric risk. It would be like an MD practicing medicine without malpractice insurance.

Which is, as Bruce alludes, how any such liability would ultimately be made acceptable. Software vendors would buy liability insurance to protect themselves in the event that they are ever found at fault. Like malpractice insurance, this would pool the risk and spread it across all the software vendors.

Which in the end eliminates any real incentive to avoid the mistakes to begin with. Sure, the premiums would increase for a vendor found at fault, but just as with malpractice insurance the pain would be diluted by eventually raising everyone's rates. And everyone would just price the rate increase into their business model, exactly like the medical community does today. In the end it wouldn't really be the vendor's money or risk.

And that's what it really boils down to in the end. It's a matter of exactly whose ox is getting gored. You notice that the only people suggesting that software vendors be held liable or otherwise punished for defects are not themselves producing software products. I have never seen a zero-defect advocate who could actually deliver zero-defect software.

SPML gateways move forward

Mark Diodati points out this interesting open source SPML gateway. There is an accompanying blog post by Jerry Waldorf of Sun that has a lot of background on the project and presents some interesting ideas about what you could do with an SPML gateway.

This is exactly the kind of thing I had hoped to see happen when I started working on SPML. Working at Access360, it was frustrating to see so much time and effort spent writing connectors to all the disparate systems that needed to be provisioned. If only project keychain had existed back then.
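
To illustrate the gateway idea: one SPML-style endpoint fronts all the disparate targets and dispatches each request to a target-specific connector, so every caller only has to speak one protocol. This is just a toy sketch of the concept (all the names and connectors here are made up, not the actual project keychain code):

```python
# Hypothetical connectors for two managed systems. In a real gateway each
# of these would translate the request into LDAP operations, SSH commands,
# vendor APIs, etc.
def ldap_add(user_id):
    return f"ldap: added {user_id}"

def unix_add(user_id):
    return f"unix: created {user_id}"

# The gateway's routing table: SPML target ID -> connector.
CONNECTORS = {
    "corp-ldap": ldap_add,
    "linux01": unix_add,
}

def handle_add_request(target_id, user_id):
    """Dispatch an SPML-style add request to the connector for the target."""
    connector = CONNECTORS.get(target_id)
    if connector is None:
        return ("failure", f"unknown target {target_id}")
    return ("success", connector(user_id))

print(handle_add_request("corp-ldap", "jbohren"))
```

The point is that the provisioning system upstream never has to know how each target is actually managed; adding a new system means adding one connector behind the gateway, not a new integration in every caller.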

Great stuff if you are interested in provisioning.

If only, if only

I like skeptics. I like to consider myself one. I also thoroughly enjoy reading the IT Skeptic. But this borders on pure fantasy:

Then there is the question of the pace at which this beast is moving. Although the document referenced here is dated October 2008 the changelog ends in January 2008, and it is certainly the only output we have seen this year other than one(?) multi-vendor demo. There are zero commitments from DMTF or from the vendors for any sort of timeline for delivery of anything. As I have pointed out in the past,

“WARNING: vendors will wave this white paper around to overcome buyer resistance to a mixed-vendor solution. For example if you already have availability monitoring from one of them, one of the other vendors will try to sell you their service desk and use this paper as a promise that the two will play nicely. “

All I could think of when I read this was “If only”. If only the vendors cared enough about interoperability standards to make it a selling point. Then you might eventually get real interoperability, even if it started as vaporware.

But the reality is the front line sales guys usually don't know or care about standards, past checking boxes in an RFP. William Vambenepe sums it up nicely in this rebuttal:

Has anyone actually seen this happen? I am asking because so far, both at HP and Oracle, the only sales reps I have ever met who know of CMDBf heard about it from their customers. When asked about it, the sales person (or solutions engineer) sends a email to some internal mailing list asking “customer asking about something called cmdbf, do we do that?” and that’s how I get in touch with them. Not the other way around.

Also, if the objective really was to trick customers into “mixed-vendor solutions” then I also don’t really understand why vendors would go through the effort of collaborating on such a scheme since it’s a zero-sum game between them at the end.

I don't mean this to be critical of the sales guys. They care (as they should) about the requirements the customers care about. Until customers start making support for interoperability standards like CMDBf (in the ITSM space) and SPML (in the IdM space) a real requirement, these standards will never get robust implementations. And customers will continue to get stuck with siloed solutions.

Reconcilable Differences

Having worked in both IdM and ITSM, I am constantly struck by the similarities and by how the same problems get reworked over and over again in the two industries. For instance, William Vambenepe has this to say about configuration item reconciliation:

Whether you call it a CMDB or some other name, any repository of IT model elements has the problem of establishing whether two entities are the same or not.

Which is exactly the same account reconciliation problem that provisioning vendors have struggled with for years. When a provisioning system discovers a Linux account with user ID jbohren, does it belong to me or to my father, Joe Bohren? BTW, if you email to my first initial and last name at, it won't reach me. If you do the same at it will.
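
To make the ambiguity concrete, here is a toy sketch (the naming rule and helper function are my own examples, not any vendor's actual matching logic):

```python
# Illustrative sketch of why attribute matching alone is ambiguous: a
# first-initial-plus-last-name rule maps two different identities to the
# same discovered account ID.

def expected_account_id(full_name):
    """Predict the account ID a first-initial + last-name convention yields."""
    first, last = full_name.lower().split()
    return first[0] + last

identities = ["Jeff Bohren", "Joe Bohren"]
discovered_account = "jbohren"

# Both identities map to "jbohren", so an automatic match is impossible.
matches = [name for name in identities
           if expected_account_id(name) == discovered_account]

# Exactly one candidate is required for automatic reconciliation; anything
# else leaves the discovered account orphaned for manual review.
is_orphan = len(matches) != 1
print(matches, is_orphan)
```

Multiply that one ambiguous account by thousands of discovered accounts and you have the scalability problem described below.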

It's also the same problem that role management software deals with when trying to determine whether roles in different systems represent the same logical business duty. Does the role named Accounting Manager represent the manager of the accounting department, or the IT guy who manages the accounting software system?

Reconciliation is a big scalability problem in IdM and ITSM systems. Often there are too many orphaned items (items that cannot be unambiguously matched to a known entity) for the IT staff to handle. Determining what to do with orphaned items can also be very difficult.

One interesting approach to account reconciliation is to let account owners adopt the orphaned accounts. The adoption process involves the owner providing the credentials to log into the account on a web page. If the system can verify that those are the correct credentials, then that person is assumed (or allowed) to be the owner.
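
Sketched in code, the adoption flow might look something like this (the class and the credential check are hypothetical stand-ins; a real system would attempt an actual login against the managed endpoint rather than compare stored values):

```python
# Toy model of orphan-account adoption via credential verification.

class OrphanAccount:
    def __init__(self, system, user_id, password):
        self.system = system
        self.user_id = user_id
        self._password = password   # stand-in for "would this login succeed?"
        self.owner = None

def verify_credentials(account, user_id, password):
    # Real implementation: try to authenticate against account.system.
    return account.user_id == user_id and account._password == password

def adopt(account, claimant, user_id, password):
    """Grant ownership only if the claimant can prove they hold the login."""
    if account.owner is None and verify_credentials(account, user_id, password):
        account.owner = claimant
        return True
    return False

orphan = OrphanAccount("linux01", "jbohren", "s3cret")
print(adopt(orphan, "Jeff Bohren", "jbohren", "s3cret"))  # True
print(orphan.owner)  # Jeff Bohren
```

The nice property is that the proof of ownership is self-service: the system never has to guess, because only someone who can actually log into the account can claim it.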

But this approach only works with accounts and account based systems. For now reconciling other orphaned items is still mostly a manual process. I would be curious to hear about solutions that other people have found for various reconciliation problems.

Talking SPML

Oddly enough the New Year has seen a spate of SPML discussions. James McGovern gets the whole thing kicked off here. Jackson Shaw adds his thoughts here, and makes the point that SaaS really needs federation and provisioning to work well.

Mark Diodati (who has been following SPML for a long time) has some interesting thoughts about it here. Mark points out that SPML lacks built-in authn and authz capabilities. This was an intentional design decision in both SPML 1.0 and 2.0, as it was felt at the time that authn and authz should be part of the web services infrastructure, not the provisioning standard. In retrospect, that decision put too much faith in how well authn and authz standards would be adopted. This also points out the unique position that identity web services are in: they must be secured, yet they must drive the security as well. It's a real chicken-and-egg dilemma. Or to use the WSDM nomenclature, a real MUWS-MOWS dilemma.

Ian Glazer (a former colleague of mine at Access360 who also served with me on the PSTC) wants to stop talking about federated provisioning. Ian makes the point that federated provisioning is not really any different from enterprise provisioning. Ian is correct that they are basically the same, although there are some subtle differences in how they play out in deployment.

I really hope these discussions lead to some real movement around leveraging SPML to enable SaaS services. I am always up for an SPML conversation. If you want to discuss SPML (or identity or change management), my work email is my first initial and last name at and my personal email is the same at

Much ado about metric

XKCD has put out this great summary of metric units. While the comic is great fun (I especially like all the shiny Firefly/Serenity references), it has regrettably set off a round of bashing the US for clinging to the English system while the rest of the presumably more enlightened world uses the Metric system.

While I agree that the metric system is superior, I find many of the arguments put forth for switching to be specious at best. The most popular of these is the all-time poster child for why we should use the metric system: the loss of the Mars Climate Orbiter in 1999. In that case, the orbiter was lost because some of the telemetry data was delivered by an outside contractor in English units while the orbiter software was expecting metric units.

I'm sorry, but this is a very unconvincing argument. First of all, just because metric would be better for a satellite guidance system is no reason that I have to buy salsa measured in grams (for the record, the jar of Tostitos salsa I just bought is marked in both English and metric units, 24 oz and 680 g respectively). In fact, in college I did all my engineering work in metric and then lived my non-academic life in English units. It's really not that big a deal.

Second, software errors caused by unit mismatch can happen even in a consistently all-metric environment. For instance, in the Mars Climate Orbiter case the results would have been just as catastrophic if data that was expected to be in meters had been delivered in kilometers. One common software error I have seen repeatedly over my career is using local time instead of GMT.
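
Here's a toy example of the kind of silent, all-metric unit mismatch I mean (the function and numbers are made up for illustration): a producer reports a distance in kilometers, the consumer assumes meters, and nothing in the code flags the error.

```python
# Both sides are "metric", yet the mismatch is invisible: the value is
# just a number, with no units attached.

def safe_separation(distance_m, minimum_m=500.0):
    """Expects the separation distance in meters."""
    return distance_m >= minimum_m

reported_km = 2.5                              # producer's value, in kilometers
print(safe_separation(reported_km))            # treated as 2.5 m: False
print(safe_separation(reported_km * 1000.0))   # converted to meters: True
```

The fix has nothing to do with which measurement system you pick; it's about agreeing on (and ideally encoding) the units at every interface.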

Third, there is very little advantage in switching but it would be hugely expensive.

But here is my dirty secret: I love that the US uses the English system. Not because it’s better (it’s not), but because it represents a libertarian philosophy. Rather than the government forcing everyone to use one system of measure, the choice is left to the consumers. If they decide they want the metric system, they will force the manufacturers to use it. So far, when offered the choice, American consumers have collectively decided that we should keep the English system. I don’t see that changing any time soon.

BTW, I also wrote about the myths and misconceptions of the Metric system here.

What is an engineer?

Phil Windley has this interesting post about the cost of bulk cold storage. He relates this definition of engineering:

I often say, quoting Pat Taylor, one of my professors in my undergraduates days in metallurgical engineering, that an engineer is some one who can do for a dollar what any fool could do for two. Of course, building performant, efficient code is part of this, but so is understanding the cost of bulk storage and other resources and using that in the trade-off.

I love that definition.

I often quote the engineering definition of Professor Gale Neville Jr., my thesis advisor and mentor back at the U of Florida Aerospace Engineering department. He defined an engineer as “someone who measures with a micrometer, marks with a piece of chalk, and cuts with an ax”.

Of course these two professors are really describing two different facets of engineering: efficiency improvement and design. Both are critical in software development.

When the police start behaving like the criminals

Repressive regimes have always posed this question to the faces under their jackboot-clad feet:

If you are innocent, what do you have to fear?

Think about that for a moment and then consider this revelation in the UK Times Online:

THE Home Office has quietly adopted a new plan to allow police across Britain routinely to hack into people’s personal computers without a warrant.

The move, which follows a decision by the European Union’s council of ministers in Brussels, has angered civil liberties groups and opposition MPs. They described it as a sinister extension of the surveillance state which drives “a coach and horses” through privacy laws.

The hacking is known as “remote searching”. It allows police or MI5 officers who may be hundreds of miles away to examine covertly the hard drive of someone’s PC at his home, office or hotel room.

Material gathered in this way includes the content of all e-mails, web-browsing habits and instant messaging.

Under the Brussels edict, police across the EU have been given the green light to expand the implementation of a rarely used power involving warrantless intrusive surveillance of private property. The strategy will allow French, German and other EU forces to ask British officers to hack into someone’s UK computer and pass over any material gleaned.

This article focuses on the UK, but it's hard to see how it would be materially different in any of the other EU member countries. That is to say, those countries that have unwisely decided to subject their citizens to the whims of unelected officials in Brussels.

The methods of “remote searching” sound rather like what the bad guys do:

Richard Clayton, a researcher at Cambridge University’s computer laboratory, said that remote searches had been possible since 1994, although they were very rare. An amendment to the Computer Misuse Act 1990 made hacking legal if it was authorised and carried out by the state.

He said the authorities could break into a suspect’s home or office and insert a “key-logging” device into an individual’s computer. This would collect and, if necessary, transmit details of all the suspect’s keystrokes. “It’s just like putting a secret camera in someone’s living room,” he said.

Police might also send an e-mail to a suspect’s computer. The message would include an attachment that contained a virus or “malware”. If the attachment was opened, the remote search facility would be covertly activated. Alternatively, police could park outside a suspect’s home and hack into his or her hard drive using the wireless network.

When the police start behaving like the criminals, then you have a very serious problem.

A different view on OpenID branding

Nico Popp has his new year's wishes for OpenID here. There are a lot of good suggestions, but there is one I would beg to differ with:

Everyone agrees that OpenID needs to emerge as a brand that consumers can recognize.

Clearly Nico’s definition of “Everyone” is slightly different from mine. At the very minimum it doesn’t include me. But putting semantics aside Nico continues:

Similarly to Visa for payment, Dolby for music and Gore-Tex for rainwear, OpenID ought to become the “ingredient brand” for identity. The reason the OpenID brand needs to emerge is that we need a “network mark” that transcends all the identity silos. Very much like consumers know that their bank card will work when they see the Cirrus network logo on an ATM machine, consumers need to know that their identity will work on a Web site that carries the OpenID network logo. A network mark has a simple yet powerful meaning. It does not matter whether the card is from Bank of America, Wells Fargo or WAMU, it just works with this ATM machine. It does not matter whether the identity is from Google, Yahoo! or MySpace, it just works with this Web site.

In the OpenID brand lies the one big problem. Although a strong OpenID brand will prove to be good for everyone in the long run (by creating ubiquitous interoperability, Visa helped card issuing banks make more money than they would made on their own), at this time, none of the large consumer companies involved in the OpenID foundation have any incentive to promote another brand than their own. Therefore, the foundation needs to create a forcing function. My recommendation would be to leverage its ownership of the OpenID intellectual property to enforce the network mark. Let us keep OpenID free to all, but let us require everyone who uses the technology and benefit from the free IP to display the OpenID logo.

I don't think this is a very promising strategy. Rather than OpenID being branded, I believe the important branding is that of the identity providers that enable OpenID. In other words, the brand should be Yahoo, Google, and the other big identity providers, not OpenID. In the same way, the brand Facebook users care about is Facebook, not Facebook Connect.

Trying to push the OpenID branding above the identity provider branding will inhibit OpenID adoption, not enhance it. You are then asking identity providers to do something not in their own best interest.

The average user doesn't care about OpenID. What they care about (if they care about such things at all) is that by using OpenID they can use the identity provider they already have a relationship with to explore new and interesting services that automatically know who they are, without having to register at every site.

The comparison to Visa is a bit off the mark. People care about Visa because it is an enabling service. OpenID is not; it is a means by which an identity provider becomes an enabling service.

Just my two cents.