Category Archives: LDAP


The StormPath blog has an interesting article exploring HTTP PUT vs. POST in REST-based APIs for managing identity information. The article is worth reading, but it misses the bigger picture. It points out that both HTTP PUT and POST can be used for sending updates in a REST API, but that the HTTP spec mandates that PUT be idempotent. The idempotency requirement dictates that an HTTP PUT must send all values in the request, not just the ones being modified by the client.

Now I am sure idempotent PUT operations are important to people who design ways to update HTML documents. But I'm not in that business, and neither are you. I am in the business of designing and enabling distributed identity systems, and in that business you never send a modification request that passes data you don't need to modify. Simply put, you have to assume multiple concurrent updates to the backend data.

Put another way, the article could simply have said "Never use HTTP PUT for data modification". And herein lies the most important lesson of REST APIs: REST is a means by which to build distributed systems, not an end in itself. The fact that you are using REST does not obviate the principles of basic distributed system design.

Oh, but it gets worse. Assuming your data model is attribute-value based, some of those attributes are going to be multi-valued attributes. Just as a client should only transmit the attributes that are modified, it should also only transmit the value modifications for multi-valued attributes.

That's why LDAP Modify works the way it does. One common mistake developers make with LDAP is not doing proper multi-valued attribute updates. Likewise, your REST API will need to support not only partial record updates but partial attribute-value updates as well.
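To make the concurrency hazard concrete, here is a minimal Python sketch (illustrative only, not tied to any particular directory or REST API) contrasting a PUT-style full-record replace with a value-level delta of the kind LDAP Modify uses:

```python
# Two clients update the same record. Client B adds a group directly in
# the store; client A, working from a stale read, then sends its update.

record = {"mail": "jsmith@example.com",
          "memberOf": {"staff", "orlando-office"}}

# Client B's concurrent change lands first.
record["memberOf"].add("vpn-users")

# PUT-style update: client A replays the entire record as it last saw
# it, plus its one addition. The full replace clobbers B's change.
put_payload = {"mail": "jsmith@example.com",
               "memberOf": {"staff", "orlando-office", "admins"}}
put_result = dict(put_payload)
assert "vpn-users" not in put_result["memberOf"]  # B's update is lost

# Delta-style update: client A sends only the value it is adding, so
# both concurrent changes survive.
delta = {"add": {"memberOf": {"admins"}}}
for attr, values in delta["add"].items():
    record[attr] |= values
assert record["memberOf"] == {"staff", "orlando-office",
                              "admins", "vpn-users"}
```

The delta form is exactly why LDAP Modify distinguishes add, delete, and replace operations at the value level rather than replacing whole entries.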

Did you get DC source code for Christmas?

Just in time for Christmas, Samba 4.0 was released. The big news here is that Samba 4.0 adds Active Directory Domain Controller emulation, including Kerberos, LDAP, DNS, and a number of other services.

While this is an impressive technical achievement, I don't really see many enterprises adopting it. Samba 4 is fighting against one of the biggest IT pressures: headcount reduction. Most enterprises are now willing to pay more in software license costs if it saves them administrative man-hours.

So unless Samba 4 is going to be easier to install and maintain than Windows servers, it’s not really going to have an impact. Who knows, maybe it will be that easy. If you have Samba 4 in production drop me a comment and let me know what you think.

Meanwhile, Jackson Shaw is … unimpressed.

Enter the Migrator

One common business case we get is migrating from various directory servers to AD. This is usually an issue of per-user license cost, but lower maintenance is also a factor. Companies are realizing that since they are maintaining AD anyway, why pay for and maintain multiple other directory servers as well? For employee accounts it usually doesn't make sense to have the same account in two places and need additional processes just to keep them in sync.

There are several ways you can migrate directories. You could use a one-time import/export, a metadirectory, or a provisioning system, but these approaches have several key drawbacks. One issue is that in most cases you can't migrate the user passwords. Another is that the migration may require custom attributes to be added to AD (try getting your AD team to agree to that).

But the biggest issue is that these directories exist for a reason. There are client apps, sometimes tens or hundreds of them, which rely on the information in the old directories. Most home-grown apps written for one directory won't be able to switch over to AD without extensive rewriting. Even commercial apps that support AD may require significant and disruptive configuration changes.

Enter the Migrator (obscure Disney reference intended)

A virtual directory can be your Migrator. The solution is to stand up a virtual directory that merges your AD with the old directory into a single view that emulates the old directory. Run both directories side by side while migrating the accounts. When a password change is made, the virtual directory updates both AD and the old directory with the new value, so after running side by side long enough, most of the passwords will have been migrated. Eventually the old directory can be retired.

This approach has two main advantages:

  • no changes need to be made to the client applications
  • no schema changes need to be made to AD.

There is a good white paper that covers this in detail on the OptimalIdM web site (no registration required).
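The merge-and-dual-write pattern described above can be sketched in a few lines of Python. This is a toy model, not any vendor's API; the class and method names are invented for illustration:

```python
class DictDirectory:
    """Stand-in for a directory backend (AD or the legacy directory)."""
    def __init__(self, entries):
        self.entries = entries

    def lookup(self, uid):
        return self.entries.get(uid)

    def set_password(self, uid, pwd):
        self.entries.setdefault(uid, {})["password"] = pwd


class MigratorView:
    """Virtual-directory view merging a legacy directory with AD."""
    def __init__(self, legacy, ad):
        self.legacy, self.ad = legacy, ad

    def lookup(self, uid):
        # Merged view: prefer AD once the account exists there,
        # fall back to the legacy directory otherwise.
        return self.ad.lookup(uid) or self.legacy.lookup(uid)

    def change_password(self, uid, pwd):
        # Dual write: both stores receive the new password, so
        # passwords migrate as users naturally change them.
        self.legacy.set_password(uid, pwd)
        self.ad.set_password(uid, pwd)


legacy = DictDirectory({"jsmith": {"password": "old-secret"}})
ad = DictDirectory({})
view = MigratorView(legacy, ad)

view.change_password("jsmith", "new-secret")
assert ad.lookup("jsmith")["password"] == "new-secret"
assert legacy.lookup("jsmith")["password"] == "new-secret"
```

Client applications keep talking to the view, and once enough passwords have converged in AD, the legacy backend can be dropped without the clients noticing.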

SPML and DSML search filters not so hard

One issue that has been raised with regard to SPML is search filters. SPML allows searches that optionally specify a starting point (in terms of an SPML container), a subset of data to return, and a search filter. In the DSML Profile, the search filter is naturally a DSML filter.

DSML filters can be arbitrarily complex, just like the LDAP filters they model. For instance, a DSML filter could express "get everyone with the last name Smith", or in DSML:

<filter xmlns='urn:oasis:names:tc:DSML:2:0:core'>
  <substrings name='Name'><final>Smith</final></substrings>
</filter>


Or it could express "get everyone with the last name Smith who is not located in Orlando":

<filter xmlns='urn:oasis:names:tc:DSML:2:0:core'>
  <and>
    <substrings name='Name'><final>Smith</final></substrings>
    <not>
      <substrings name='Office'><any>Orlando</any></substrings>
    </not>
  </and>
</filter>
Now if your back-end data is stored in LDAP, this is pretty easy to handle: just convert the DSML filter to an LDAP filter and do any attribute name mappings required. If your back-end data is in SQL, it is only slightly more difficult to translate the DSML filter into a SQL WHERE clause.
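To show how mechanical the LDAP case is, here is a rough Python sketch that translates a small subset of DSML filter elements into an RFC 4515 LDAP filter string. Only a few element types are handled; a real implementation would cover the full DSML grammar plus attribute name mapping:

```python
import xml.etree.ElementTree as ET

NS = "{urn:oasis:names:tc:DSML:2:0:core}"

def to_ldap(node):
    """Recursively convert a DSML filter element to an LDAP filter string."""
    tag = node.tag.replace(NS, "")
    if tag == "filter":
        return to_ldap(node[0])
    if tag in ("and", "or"):
        op = "&" if tag == "and" else "|"
        return "(%s%s)" % (op, "".join(to_ldap(c) for c in node))
    if tag == "not":
        return "(!%s)" % to_ldap(node[0])
    if tag == "equalityMatch":
        return "(%s=%s)" % (node.get("name"), node.find(NS + "value").text)
    if tag == "substrings":
        # LDAP substring form: initial*any1*any2*final
        initial = node.find(NS + "initial")
        final = node.find(NS + "final")
        parts = [initial.text if initial is not None else ""]
        parts += [a.text for a in node.findall(NS + "any")]
        parts += [final.text if final is not None else ""]
        return "(%s=%s)" % (node.get("name"), "*".join(parts))
    raise ValueError("unsupported DSML filter element: " + tag)

dsml = """<filter xmlns='urn:oasis:names:tc:DSML:2:0:core'>
  <substrings name='Name'><final>Smith</final></substrings>
</filter>"""
print(to_ldap(ET.fromstring(dsml)))  # prints (Name=*Smith)
```

The SQL translation follows the same recursive shape, emitting `AND`/`OR`/`NOT` and `LIKE` clauses instead of LDAP operators.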

But what if your back-end data store doesn't support a query mechanism? What if the data is in a flat file, or a NoSQL DB? What if the data is only accessible through an API that doesn't allow for filtering?

There are several ways to solve that problem, but the easiest is to recursively walk the DSML filter and create a decision tree where each node determines whether a given entry passes the part of the filter it knows. The code for this is pretty simple in .NET and I posted an example here. Note that this example is just a partial implementation of the SPML search request for the purposes of demonstrating the concept; it is not a full-featured implementation of SPML.

The basic idea is that an abstract data provider returns a dictionary of the attribute values for each entry in the data. The interface could look like this (in C#):

public interface IUnfilteredDataProvider {
    List<DSMLData> DoUnfilteredSearch();
}


In this example the sample data provider reads entries from a flat file. On each search request the filter is recursively read and turned into nodes in a decision tree. Each data entry is then passed to the decision tree and if it passes the filter it is appended to the returned results:

List<DSMLData> dataList = this._unfilteredProvider.DoUnfilteredSearch();
DataFilter df = GetDataFilter(searchRequest);
List<PSOType> returnPSOs = new List<PSOType>();

// return only those entries that pass the filter
foreach (DSMLData data in dataList) {
    if (df.Pass(data)) {
        // the original listing is truncated here; the loop body converts
        // the matching entry to a PSO (helper name assumed) and appends it
        returnPSOs.Add(data.GetPSO());
    }
}
The GetDataFilter method walks the DSML filter and constructs a decision tree (feel free to download the sample and look at the code for more details). No special meaning is given to any of the attributes returned by the provider; they are all treated as DSML attributes. Of course, you will note a potential scalability issue with large data sets, but there are several tricks that can be used to minimize that (thoughts for a later post).
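The same decision-tree idea can be sketched compactly in Python (the linked sample is .NET; the node shapes and helper names here are invented for illustration). Each filter node becomes a predicate over a dictionary of attribute values, logical nodes combine their children's predicates, and the whole tree is built once per search and then applied to every unfiltered entry:

```python
def build(node):
    """Turn a (op, args) filter node into a predicate over an entry dict.

    An entry is a dict mapping attribute names to lists of values, so
    multi-valued attributes fall out naturally.
    """
    op, args = node
    if op == "and":
        kids = [build(a) for a in args]
        return lambda e: all(k(e) for k in kids)
    if op == "or":
        kids = [build(a) for a in args]
        return lambda e: any(k(e) for k in kids)
    if op == "not":
        kid = build(args)
        return lambda e: not kid(e)
    if op == "equals":
        name, value = args
        return lambda e: value in e.get(name, [])
    if op == "endswith":  # models a DSML <substrings><final> match
        name, value = args
        return lambda e: any(v.endswith(value) for v in e.get(name, []))
    raise ValueError("unknown filter node: %s" % op)

# "last name Smith, not located in Orlando"
tree = build(("and", [("endswith", ("Name", "Smith")),
                      ("not", ("equals", ("Office", "Orlando")))]))

entries = [{"Name": ["Jane Smith"], "Office": ["Tampa"]},
           {"Name": ["Bob Smith"], "Office": ["Orlando"]},
           {"Name": ["Al Jones"], "Office": ["Tampa"]}]
matches = [e for e in entries if tree(e)]
assert matches == [{"Name": ["Jane Smith"], "Office": ["Tampa"]}]
```

Building the predicate tree once and reusing it across all entries keeps the per-entry cost down to a walk of the tree, which is the same property the .NET sample relies on.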

Oh, and this approach works great for creating a DSML service as well, and the general concept would be just as easy to implement in Java.

So what does all this mean? Supporting filtered searches in an SPML or DSML service is really not that hard, even if your data is stored in a data store that does not support filtering.

Familiar Ground

Johannes Ernst is predicting the demise of the RDBMS (and by extension Oracle) due to the growing popularity of NoSQL. While these kinds of technology trends are hard to predict, there is a lot of logic to what Johannes is saying. He could very well be proven prophetic.

But this is familiar ground. We have been here before.

I remember in the mid-'90s when object databases were going to kill the RDBMS. Of course, what really happened was that object-relational mapping APIs became popular instead.

Later, XML databases were going to kill the RDBMS. Instead, RDBMS vendors added native XML capabilities to their mainline products.

There are specific functional areas where RDBMSs have been displaced. For instance, LDAP directories have mostly replaced RDBMSs for identity and authentication information. But this has not dented overall RDBMS usage.

So can NoSQL slay the RDBMS after OO and XML failed? Perhaps, but I wouldn’t short Oracle just yet.

Virtual Directories, O through S

Felix Gaehtgens of Kuppinger Cole has this to say about today’s virtual directory vendors:

As someone actively covering directory services and virtual directories, several innovations have caught my attention. The players within the virtual directory space are (in alphabetical order) Optimal IDM, Oracle, SAP, Radiant Logic, Red Hat, and Symlabs. When it comes to product development and innovation within the last year, you can split those vendors right down the middle. – Optimal IDM, Radiant Logic and Symlabs have been actively developing their product and churning out new features in new versions. The others have not been adding any features, but instead spent time changing logos, product names, default file locations and otherwise integrating the virtual directory products into the respective Oracle/RedHat/SAP identity management ecosystems. In fact, in some of the latter cases I ask myself whether it is likely to expect any virtual directory product innovations anymore.

I couldn’t help but notice that the entire virtual directory space as described by Mr. Gaehtgens spans only five letters of the alphabet (o through s). It doesn’t mean anything, but it’s still odd.

The Most Magical Bullet

Ashraf Motiwala has some interesting thoughts about why IdM POCs are "difficult". Mike Trachta follows up with why even the successful POCs cause headaches for the SIs that have to reproduce the wonderful scenarios shown in the POCs in production. Both of these posts are worth reading.

I would like to throw my two cents in as the developer backstopping both the sales engineer doing the POC and the SI putting together the production system.

IdM POCs and the rollouts that follow are very difficult for two main reasons. First, the customer is often already in a bad way and looking for a magic bullet. The IdM salesman has sold him on the IdM product as a most magical bullet that will make his problems go away. Solve all your identity problems! Out of the box! Easy as pie! The winner of the POC is often the sales engineer who gets the demo as close to this fantasy as possible. Then the brunt of making that fantasy a reality falls on the SI and, depending on the size and motivation of the vendor, the product development team.

This is a very bad way for an enterprise to solve its identity problems. Lost is the trade-off analysis that should be happening. For example, when the POC focuses on provisioning Unix accounts, there is never any discussion about externalizing the identity (via PAM or a similar framework) rather than synchronizing it. This kind of logic leads to deployments that are difficult to maintain, don't scale, and need major follow-on investments as the IT infrastructure changes. Instead of holding a POC for who has "The Most Magical Bullet", enterprises would be better served by crafting a long-term IdM strategy and choosing the vendor whose product best aligns with it.

The second reason IdM POCs are so difficult is that so few IT systems support externalized identity. This is an old hobby-horse of mine, but everyone who has done IdM POCs knows the pain I am talking about. And of course there is little in the way of identity standards deployed in most enterprise systems, with the exception of LDAP (or at least the AD flavor of it).

Until those two things change, IdM POCs will continue to be difficult. And the vendor with the Most Magical Bullet will continue to win, often to the long-term detriment of the customer.