Standards fascinate me. One of the most problematic standards in almost universal use today is the kilogram (kg). The problem is that no one really knows exactly how much mass a kilogram actually has. By extension that means no one knows exactly how heavy a pound is either, since the US government defines the pound in relation to the SI kilogram.
Originally the metric system was supposed to be defined in terms of “natural laws” that the common man could measure for himself. The kg was originally defined as the mass of a cubic decimeter of water under certain conditions. This is probably what you were taught in school; it is one of many metric misconceptions (see why everything you know about the metric system is wrong).
But that approach was jettisoned as impractical due to variations in water density, temperature, etc. In 1889 the standard became defined by a set of “physical prototypes” that were manufactured and distributed to major countries. So what was a standard based on “natural laws” became based on an arbitrary hunk of platinum and iridium.
Only that has not worked either (at least not to the number of significant digits desired). The problem is that the different physical prototypes are changing mass by a small but measurable amount. So today there is effectively no precise consistent definition of a kilogram, and thus by extension the pound.
The plan going forward is to define the kg in terms of basic physical constants, similar to what has been done with the meter and the second. But for now, the kg is only an estimate at any given level of precision.
Now that SCIM 1.0 is final and SCIM 2.0 is starting I wanted to share my thoughts. First here is what I like about SCIM:
- SCIM defined a standard schema in 1.0. I wish SPML had done the same. Not doing so was one of the biggest mistakes we made.
- SCIM supports filtered and paged searches. That’s a must have in my book.
- SCIM supports multi-valued attributes with the proper modification semantics. You’d be surprised how many identity APIs I have seen that don’t get the modification semantics right.
- SCIM only did what it needed to do, nothing more.
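To make the second and third points concrete, here is a minimal sketch of what those two features look like on the wire in SCIM 1.x. The endpoint, user name, and email values are hypothetical; the filter grammar (`sw` for “starts with”), 1-based `startIndex` paging, and the `"operation": "delete"` marker for removing one value of a multi-valued attribute are from the SCIM 1.1 drafts:

```python
import json
from urllib.parse import urlencode

# Hypothetical SCIM service base URL -- illustration only.
BASE = "https://example.com/scim/v1"

# A filtered, paged search: users whose userName starts with "j",
# returning the first 10 matches (SCIM page indexes are 1-based),
# and only a subset of attributes.
params = urlencode({
    "filter": 'userName sw "j"',
    "startIndex": 1,
    "count": 10,
    "attributes": "userName,emails",
})
search_url = f"{BASE}/Users?{params}"

# A PATCH body that deletes one value of the multi-valued "emails"
# attribute without disturbing the other values -- the modification
# semantics so many identity APIs get wrong.
patch_body = json.dumps({
    "schemas": ["urn:scim:schemas:core:1.0"],
    "emails": [
        {"value": "old@example.com", "operation": "delete"}
    ],
})
```

Note what the PATCH does *not* do: it does not replace the whole `emails` list, so concurrent additions to the attribute are not clobbered.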
So what don’t I like about SCIM? I don’t really care about the REST vs SOAP debate; the protocol isn’t going to be widely used unless it’s wrapped in an API or toolset, so that’s a moot point. Honestly, I can’t think of anything I don’t like.
But will SCIM be accepted where SPML was not? I don’t know, but I think there is a decent chance. I do think announcing the IETF SCIM 2.0 effort so soon may be a mistake, as it may convince people to just ignore SCIM until 2.0 comes out.
But ultimately the proof of standards is in adoption. For it to succeed it has to be both adopted by the cloud providers as a service and by IT as a client. Each of them wants the other to go first.
My biggest question is will the backers of SCIM implement it in their main product lines. Will SalesForce.com stand up a SCIM provisioning service? Will PingIdentity then add SCIM support to their SalesForce.com offering? We shall see.
Jackson Shaw has some great points to make about it here, but I didn’t really get the parrot reference. He points to this article about SCIM which also makes some great points.
The new RESTful provisioning standard, SCIM, is being discussed a lot recently in comparison to SPML. Dave Kearns has some interesting thoughts here.
While Dave makes some good points, I think he is entirely missing the reason SPML was never accepted. SPML never gained traction because enterprises and application vendors never adopted it. It didn’t matter whether the provisioning vendors supported it or not, and it won’t matter whether provisioning vendors adopt SCIM or ignore it.
Enterprises and service providers drive adoption. The ISVs will meet their needs. If SPML or SCIM is demanded, it will be provided. That demand never materialized for SPML, partly because the provisioning vendors already had non-standard solutions for the problems SPML was intended to solve.
Will this demand materialize for SCIM? Time will tell the tale of these two standards.
Nishant Kaushik has a great (and funny) slide deck on federated provisioning on his blog. He discusses the distinctions between two flavors of federated provisioning: just-in-time (JIT) provisioning and what he terms advanced provisioning (often referred to as bulk provisioning).
I would like to clarify a couple of points in his presentation, however. He talks about a possible SAML profile of SPML for JIT provisioning. There was already an effort (which I led) to define a SAML profile of SPML in Project Liberty; most of the work has already been done if anyone wants to revive it. But this was not for JIT provisioning, as there is really no need for SPML when doing JIT provisioning — it can be done by SAML alone (or OpenID plus other pieces). Rather, the SAML profile of SPML was intended for advanced (bulk) provisioning. While the DSML profile could be used for advanced provisioning, the Liberty TEG felt that using SAML attribute assertions as the provisioning data structure was a better fit for advance-provisioning accounts that would later be used in a SAML sign-on.
Me, I see it as six of one, half a dozen of the other.
Another point that Nishant brought up is the need for an equivalent of the SAML attribute query in SPML. That already exists in the form of the SPML search query, which supports retrieving a single user record and asking for only a subset of its attributes.
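For illustration, here is a rough sketch of what such a query looks like using SPMLv2’s search capability with the DSML profile. This is a paraphrase, not a copy from the spec — the target ID and attribute names are hypothetical, and element names and namespaces are abbreviated; consult the OASIS SPMLv2 documents for the exact schema:

```xml
<searchRequest xmlns="urn:oasis:names:tc:SPML:2:0:search"
               xmlns:dsml="urn:oasis:names:tc:DSML:2:0:core">
  <query targetID="hr-directory" scope="oneLevel">
    <!-- select a single user record -->
    <dsml:filter>
      <dsml:equalityMatch name="uid">
        <dsml:value>jdoe</dsml:value>
      </dsml:equalityMatch>
    </dsml:filter>
    <!-- ask for only a subset of attributes -->
    <dsml:attributes>
      <dsml:attribute name="cn"/>
      <dsml:attribute name="mail"/>
    </dsml:attributes>
  </query>
</searchRequest>
```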
When discussing whether JIT or advanced provisioning is appropriate, the points usually brought up are auditing, de-provisioning, and billing. But Nishant overlooks the most important question:
Do the identities have any meaning outside of authentication?
If the answer for your service is yes, then JIT provisioning is likely not an option. This is not a case of “tightly coupled” versus “loosely coupled”; rather, it is a matter of business requirements.
Take my current employer, CareMedic (now part of Ingenix). We have a suite of medical revenue cycle management web apps that we host as SaaS. We need to know the identities of our customers’ users for workflow purposes before a user ever logs into the system.
Of course there are plenty of apps where the business requirements make JIT provisioning ideal. But it still comes down to the business requirements, not the technical architecture or standards.
OK I kid about the 3D, but I am starting to hear from various identity folks that it’s time to start thinking about SPML 3.0. The latest is John Fontana’s post on that here.
While I don’t think that there are any technical reasons SPML 2.0 can’t be used for interoperable provisioning, the market has clearly not embraced it yet. There are some SPML enabled products out there, but not nearly enough to reach the critical mass that is needed.
So would an SPML 3.0 effort succeed where SPML 2.0 has so far not? I honestly can’t say, but I feel it’s worth giving it a go. The industry really needs this. My employer’s products need it.
I like skeptics. I like to consider myself one. I also thoroughly enjoy reading the IT Skeptic. But this borders on pure fantasy:
Then there is the question of the pace at which this beast is moving. Although the document referenced here is dated October 2008, the changelog ends in January 2008, and it is certainly the only output we have seen this year other than one(?) multi-vendor demo. There are zero commitments from DMTF or from the vendors for any sort of timeline for delivery of anything. As I have pointed out in the past,
“WARNING: vendors will wave this white paper around to overcome buyer resistance to a mixed-vendor solution. For example if you already have availability monitoring from one of them, one of the other vendors will try to sell you their service desk and use this paper as a promise that the two will play nicely.”
All I could think of when I read this was “If only”. If only the vendors cared enough about interoperability standards to make it a selling point. Then you might eventually get real interoperability, even if it started as vaporware.
But the reality is that the front-line sales guys usually don’t know or care about standards, beyond checking boxes in an RFP. William Vambenepe sums it up nicely in this rebuttal:
Has anyone actually seen this happen? I am asking because so far, both at HP and Oracle, the only sales reps I have ever met who know of CMDBf heard about it from their customers. When asked about it, the sales person (or solutions engineer) sends an email to some internal mailing list asking “customer asking about something called cmdbf, do we do that?” and that’s how I get in touch with them. Not the other way around.
Also, if the objective really was to trick customers into “mixed-vendor solutions” then I also don’t really understand why vendors would go through the effort of collaborating on such a scheme since it’s a zero-sum game between them at the end.
I don’t mean this to be critical of the sales guys. They care (as they should) about the requirements the customers care about. Until customers start demanding support for interoperability standards like CMDBf (in the ITSM space) and SPML (in the IdM space), these standards will never get robust implementations. And customers will continue to get stuck with siloed solutions.
XKCD has put out this great summary of metric units. While the comic is great fun (I especially like all the shiny Firefly/Serenity references), it has regrettably set off a round of bashing the US for clinging to the English system while the rest of the presumably more enlightened world uses the Metric system.
While I agree that the metric system is superior, I find many of the arguments put forth for switching to be specious at best. The most popular of these is the all-time poster child for why we should use the metric system: the loss of the Mars Climate Orbiter in 1999. The orbiter was lost because some of the telemetry data was delivered by an outside contractor in English units while the orbiter software was expecting metric.
I’m sorry but this is a very unconvincing argument. First of all, just because metric would be better for use in a satellite guidance system is no reason that I have to buy salsa measured in grams (for the record the jar of Tostitos Salsa I just bought comes marked in both English and Metric, 24 oz and 680g respectively). In fact in college I did all my engineering work in Metric and then lived my non-academic life in English. It’s really not that big a deal.
Second, software errors caused by unit mismatches can happen even in a consistently all-metric environment. For instance, in the Mars Climate Orbiter case the results would have been just as catastrophic if data that was expected to be in meters had been delivered in kilometers. One common software error that I have seen repeatedly over my career is using local time instead of GMT.
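To make that concrete, here is a minimal sketch of an all-metric unit bug. The numbers and function names are invented for illustration; the point is that both sides speak metric and the failure is the implicit unit contract at the interface:

```python
def altitude_km():
    """Hypothetical telemetry source -- reports altitude in kilometers."""
    return 5.0  # the craft is 5 km (5000 m) above the surface

def should_fire_retros(altitude_m):
    """Hypothetical consumer -- assumes meters; fire below 100 m."""
    return altitude_m < 100.0

# Bug: the raw 5 (km) is silently read as 5 m, so the
# retro rockets fire roughly 4.9 km too early.
premature = should_fire_retros(altitude_km())          # True (wrong)

# Converting explicitly at the module boundary gives the right answer:
# 5000 m is still well above the 100 m threshold.
correct = should_fire_retros(altitude_km() * 1000.0)   # False (right)
```

No English units anywhere, and the descent sequence still fails — which is exactly the class of bug the Orbiter suffered.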
Third, there is very little advantage in switching, but it would be hugely expensive.
But here is my dirty secret: I love that the US uses the English system. Not because it’s better (it’s not), but because it represents a libertarian philosophy. Rather than the government forcing everyone to use one system of measure, the choice is left to the consumers. If they decide they want the metric system, they will force the manufacturers to use it. So far, when offered the choice, American consumers have collectively decided that we should keep the English system. I don’t see that changing any time soon.
BTW, I also wrote about the myths and misconceptions of the Metric system here.
Last year I posted this comparing different types of screws. Paul Madsen chimed in expressing his geographical preference for the Robertson (square drive) screw. While I could pontificate at length about the inherent advantages of a hexagonal drive technology used in Allen head screws versus the equilateral rectilinear drive technology used in Robertson screws, it would be counter-productive. Clearly the industry is hopelessly fractured by multiple redundant screw standards. This is not only inhibiting the transition from Screw 1.0 technologies (such as Phillips) to Screw 2.0 technologies (such as Robertson and Allen), it is also preventing the development of Screw 3.0 (also known as the Semantic Screw).
Well, we all know what is needed in a case like this: a new standard to integrate the different screw technologies. Thus I am drafting a proposal to the Proposing Organization for New Devices (POND) for a new Tool Committee (TC) to study this. The new standard will be named the Work Shop Screw Interoperability Technology (WS-ScrewIT). I humbly submit Paul and myself as co-chairs.
I know you should never jump to the solution before the requirements are well known, but I couldn’t help working on this. I even have a prototype of the technology. It looks something like this:
William Vambenepe has some keen observations about requirements here in this post about Cloud computing:
There are three types of user requirements:
- The “pulled out of thin air” requirements that someone makes up on the fly to justify a feature that they’ve already decided needs to be there. Most frequently encountered in standards working groups.
- The “it happened” requirements that assume that because something happened sometimes somewhere it needs to be supported all the time everywhere.
- The “it makes business sense” requirements that include a cost-value analysis. The kind that comes not from asking a customer “would you like this” but rather “how much more would you pay for this” or “what other feature would you trade for this”.

The Animoto use case is clearly not in the first category, but I am not convinced it’s in the third one either. When cloud computing succeeds (i.e. when you stop hearing about it all the time and, hopefully, we go back to calling it “utility computing”), it will be because the third category of requirements will have been identified and met. Best exemplified by the attitude of Tarus (from OpenNMS) in the latest Redmonk podcast (paraphrased): sure we’ll customize OpenNMS for cloud environments; as soon as someone pays us to do it.
I can absolutely attest to point number one as it pertains to standards groups. But it’s point number three that I want to highlight, as it relates to a theme I have been discussing a lot lately: namely, that IdM is messy because enterprise software vendors in general won’t externalize identity in their products beyond AD authentication.
Now I am not implying that enterprise software vendors are lazy. Rather, it’s a matter of priorities. Enterprise software vendors typically have a backlog of feature requests and fixes to work on. The ones they get asked for the most, or that they feel will give them a competitive advantage, get the priority.
Like William says, it’s not whether the customer wants a feature, but how much they are willing to pay for it and what other features they would give up in exchange.
Dave Kearns believes that if there is an IdM roadmap laid down, vendors that implement it will “reap the rewards” and those that don’t will be destined for “where are they now”. Perhaps Dave is right. But history shows us quite the opposite. Look at strong authentication for example. Despite dramatic reductions in cost and increased options, despite all the experts’ advice, and the presence of a solid roadmap, the vast majority of authentication in enterprises is password-based. And very little enterprise software supports strong authentication out-of-the-box.
So what will it take to spur enterprise vendors to support externalized identity? I really don’t know. Yet.
William Vambenepe has this great travelers guide to the various standards bodies. I have been invited to stay at the OASIS Housing Development many times. I’m pretty sure I was the Village Idiot.
I was disappointed not to see a review of the Liberty Travel Trailers.