I recently saw two polar-opposite recommendations: one from Jeff Atwood begging you not to write code, and one from Radovan Semančík suggesting that the only practical software to use is open source software that you can fix as needed.
Obviously Radovan’s approach is not a scalable one. While there are a lot of terrible software products out there, especially in the enterprise space, there are also a lot of good ones that just work. Limiting yourself to coded solutions is a waste of time that most companies won’t pay for. Radovan’s approach also limits you to open source solutions implemented in a language you are familiar with.
At the same time there are some problems that just need a coding solution, or are best solved that way.
For enterprise solutions, I am going to thread the path between Jeff’s Scylla and Radovan’s Charybdis by posing two questions:
- How much coding should be expected to implement an enterprise solution?
- How can you find enterprise solutions that work well enough that you don’t need the source code or extensive customizations?
An enterprise solution that requires you to write code or scripts for basic functionality is not well designed, in my opinion. Coding or scripting should only be required when the functionality needed is unique to a specific deployment (or too uncommon to be a main feature of the product). This is a core philosophy at OptimalIdM as well. Although the VIS virtual directory does support .NET plug-ins, most of our customers never need one. When we have seen the need for plug-ins in the past, we looked for a common feature that could be added to the product instead.
So not having to write code is one measure of an enterprise solution’s quality. Here are some others:
Ease of install – they say you only get one chance to make a good first impression, and for enterprise software the install is it. If your vendor is telling you that you need consulting hours just to install the software, it’s not going to get better from there.
Ease of use – requiring training to use enterprise software is a bad sign. Did you have to have training to use your browser or word processor? Enterprise software should be like that.
Stability – once installed and configured the software should just work. Baby-sitting should not be required. And if you really need two weeks of work or the source code to figure out why your solution stopped working, you made a poor vendor choice.
So go ahead and write code, but only when you have to.
Mark Diodati sums up the recent SPML threads here. But one question still needs to be answered: if not SPML, then what? One alternative that has been put forward by Mark Diodati, Mark Wilcox, and others is the LDAP (or DSML) pull model of provisioning.
In this model you expose your user accounts via LDAP through a Virtual Directory (VD) instance exposed to your service provider. The service provider periodically queries the VD to look for account CRUD operations.
There are several compelling advantages to this model:
- LDAP is already a standard protocol
- There are de facto standard schemas (the most common of which is the standard AD account)
- This is really just an extension of a model that has already been embraced in the enterprise (look at how many apps can be AD enabled)
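To make the pull model concrete, here is a minimal sketch of the polling loop a service provider might run. It assumes a change-tracking attribute like AD’s uSNChanged is exposed through the VD; the LDAP search itself is stubbed out with an in-memory list, and all of the entry names are hypothetical. A real consumer would issue a search such as (uSNChanged>=N) through an LDAP client library:

```python
# Stub of what the VD would return for a search. In practice these
# entries come from an LDAP search against the exposed virtual directory.
DIRECTORY = [
    {"dn": "cn=alice,ou=users,dc=example,dc=com", "uSNChanged": 101},
    {"dn": "cn=bob,ou=users,dc=example,dc=com",   "uSNChanged": 205},
    {"dn": "cn=carol,ou=users,dc=example,dc=com", "uSNChanged": 350},
]

def poll_changes(last_usn):
    """Return entries modified since the last poll, plus the new cookie."""
    changed = [e for e in DIRECTORY if e["uSNChanged"] > last_usn]
    new_cookie = max((e["uSNChanged"] for e in changed), default=last_usn)
    return changed, new_cookie

# First poll: everything is new to the service provider.
entries, cookie = poll_changes(0)
print([e["dn"] for e in entries])

# Subsequent poll with the saved cookie: nothing has changed.
entries, cookie = poll_changes(cookie)
print(entries)  # []
```

The point is that the service provider only needs standard LDAP searches plus a persisted cookie to detect account creates and updates; deletes would need a tombstone mechanism, which is one of the details this simple model glosses over.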
Could that be it? Is the solution to service provider provisioning really this simple? No, at least not without SAML. While this model shows promise, there is a problem: passwords. If your enterprise is not ready to use SAML to authenticate to your service provider, then you are left with two choices, both unpleasant.
First you could just punt on passwords and force your users to manage their passwords on their own. This is no worse than the situation without any provisioning, but certainly not where you could be if you used a provisioning solution to push the passwords out to the service provider as needed.
The second is to expose your password hashes via your VD. If your service provider supports the same salting and hashing algorithms, then passwords can be synchronized by copying the hash across. In fact, a recent version of the Google Apps directory sync utility claims to be able to do just that.
But think about this for a moment. If you do that then the service provider knows the clear text password to log into your network for every one of your users that actually uses the service. After all, the user has to provide the clear text password to the service provider’s login page to generate the hash value to compare to the hash you sent them. If that’s the same as the hash value in AD, then the service provider knows your AD password by definition.
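Here is that argument as a sketch of what the service provider’s login check has to do under hash sync. The salted SHA-256 scheme is purely illustrative, standing in for whatever algorithm the two sides agree on; the password and salt values are made up:

```python
import hashlib

# Illustrative scheme only: a salted SHA-256 hash standing in for
# whatever algorithm the enterprise and service provider have agreed on.
def hash_password(password, salt):
    return hashlib.sha256(salt + password.encode("utf-8")).hexdigest()

# What the enterprise synced to the service provider: hash and salt,
# but never the cleartext password.
salt = b"per-user-salt"
synced_hash = hash_password("S3cret!", salt)

# What the service provider's login page must do: take the user's
# CLEARTEXT password, hash it, and compare. At this moment the provider
# holds the cleartext -- which is also the user's AD network password.
def provider_login(submitted_password):
    return hash_password(submitted_password, salt) == synced_hash

print(provider_login("S3cret!"))  # True -- and the provider saw "S3cret!"
print(provider_login("wrong"))    # False
```

No matter what algorithm is used, the comparison can only happen after the user hands the provider the cleartext password, which is the crux of the trust problem.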
Do you trust Google with the clear text AD passwords? I’m not picking on Google; there simply aren’t any service providers I would trust with that information.
Another alternative I have heard is that the service provider’s login page would make an LDAP bind call back to the VD with the supplied password to do the authentication. Again, that gives the service provider a clear text version of your AD password.
Are you sure you really want to do that?
But if your enterprise and your service provider can implement SAML, then the LDAP pull model looks a lot more compelling. I would be curious to hear from anyone that has implemented this or is thinking of implementing it. And if anyone is using the password hash sync approach, I would be interested in hearing about that as well.
Posted in AD, Authentication, Google, Identity, Password Management, Provisioning, SAML, SPML, Standards, Virtual Directory
Tagged Authentication, Google, Provisioning, SAML, SPML, Virtual Directory
OptimalIdM has announced support for Microsoft WIF (you can get more info here). What they have done is pretty interesting. They have created an STS that front-ends their Virtual Directory. This allows a single STS to be used to issue claims against multiple identity stores.
Of course the main use case here is the multiple AD forest scenario, but it could also support disparate identity stores such as other LDAP directories, databases, etc.
[Full disclosure: I have done consulting work for OptimalIdM in the past.]
Posted in AD, Identity, Identity Bus, Standards, Virtual Directory
Tagged AD, ADFS, Federation, Identity, OptimalIdM, Virtual Directory, WIF
Felix Gaehtgens of Kuppinger Cole has this to say about today’s virtual directory vendors:
As someone actively covering directory services and virtual directories, several innovations have caught my attention. The players within the virtual directory space are (in alphabetical order) Optimal IDM, Oracle, SAP, Radiant Logic, Red Hat, and Symlabs. When it comes to product development and innovation within the last year, you can split those vendors right down the middle. – Optimal IDM, Radiant Logic and Symlabs have been actively developing their product and churning out new features in new versions. The others have not been adding any features, but instead spent time changing logos, product names, default file locations and otherwise integrating the virtual directory products into the respective Oracle/RedHat/SAP identity management ecosystems. In fact, in some of the latter cases I ask myself whether it is likely to expect any virtual directory product innovations anymore.
I couldn’t help but notice that the entire virtual directory space as described by Mr. Gaehtgens spans only five letters of the alphabet (o through s). It doesn’t mean anything, but it’s still odd.
Ashraf Motiwala relays a statistic that 90% of all virtual directory deployments are used for authentication only. If true (and I don’t doubt it), this really isn’t surprising. Most enterprise software doesn’t support LDAP for anything but authentication, and a lot doesn’t even do that.
As I have said repeatedly, the single biggest impediment to enterprise identity management is that enterprise software seldom supports the externalization of identity. And it’s not really the vendors’ fault. The vendors are spending their development dollars on the features that their customers are asking for. Until customers start making externalized identity a selection criterion, the vendors are going to just do the minimum, which for many is authentication.
For instance, in the product I currently work on, ChangeGear, we support LDAP in three ways. We support authentication and user profiles via either Windows Integrated Authentication or generic LDAP. We also support AD for populating the pick lists of impacted users and groups when creating or processing Change Management Requests (RFCs). Lastly, we support AD as one of the means of discovering assets to populate our CMDB.
There are a lot of other interesting things we could be doing with LDAP, but our customers have not expressed much interest in them.
Posted in Authentication, Change Management, CMDB, Identity, Identity Management, LDAP, Software, Standards, Virtual Directory
Tagged Authentication, Change Management, ChangeGear, Identity, Identity Management, ITIL, ITSM
Jackson Shaw adds some interesting thoughts to the Virtual-Directory vs Directory debate here. He points out that the real lock-in comes with authentication:
And, finally, what’s the big deal about being “locked into AD”? Have people forgotten that AD *is* an LDAP directory? You get “locked into AD” when you use it for desktop authentication otherwise it’s just an LDAP directory with its own set of idiosyncrasies just like any other LDAP directory.
I would add IIS Windows Integrated Authentication to that as well.
And this is a very interesting point. If you are using Windows Authentication for either the desktop or web login, the need to support multiple types of directories is greatly diminished.
Clayton Donley makes a very compelling argument that there is significant value in using a virtual directory even if an application only needs to access a single directory. So call me converted on that point.
Also, I should not have said that it’s not that difficult to write vendor-independent LDAP code. It can be very difficult, depending on what features are used. As Clayton points out, there can be very significant differences between vendors in what should be standard behavior. I suspect there are also significant differences between virtual directories, but I haven’t played with them enough to say for sure.
I often fall into the trap of thinking like a COTS software developer (since that is what I am), and forget the legions of in-house enterprise software developers. For COTS developers, writing vendor-neutral LDAP code shouldn’t be that hard and should be the goal. For custom application development, writing to a virtual directory may make a lot more sense, especially if your enterprise has already deployed one.
It would be nice if someone maintained a KB of vendor specific LDAP behavior. If anyone knows of one that exists, please let me know.
And yes, IGF is coming. But it’s not available yet even for Java, much less .NET and scripting language developers.