
Security Policy Provisioning

There is an idea that has been kicked around in IdM for years called Security Policy Provisioning. Basically the idea is that you have a system that takes centrally managed security policies and pushes them out to disparate systems, the same way provisioning systems manage user accounts. We kicked around the idea of building a Security Policy Provisioning product back at OpenNetwork, but never did. In all honesty I had expected some IdM vendor to have added this feature to their provisioning engine by now, but as far as I know none ever went further than user role management.

Well, Axiomatics has apparently rolled it out in the guise of pushing their XACML policies to Windows Server 2012 to leverage the new authorization features. This is a very neat idea.

Of course after you push out the policies, Windows Server 2012 becomes the PDP as well as the PEP. You could develop a similar solution without using XACML at all.
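
To illustrate what a non-XACML version could look like, here is a minimal sketch of a provisioning-style connector contract that each managed system would implement. All of the names here are hypothetical, not a real product API:

using System.Collections.Generic;

// Hypothetical central policy model; in practice this would carry
// whatever rule structure your central policy store uses.
public class SecurityPolicy {
    public string Name { get; set; }
    public string Rule { get; set; }
}

// Each managed system gets a connector that knows how to translate
// the central policy into its native authorization settings.
public interface IPolicyTarget {
    // Push a centrally managed policy down to this system.
    void ApplyPolicy(SecurityPolicy policy);

    // Report what is actually in effect, so the central engine can
    // reconcile drift the same way user provisioning engines do.
    IEnumerable<SecurityPolicy> ReadEffectivePolicies();
}

The central engine would then loop over its targets and apply or reconcile policies, just as a user provisioning engine does with accounts.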

Polar opposites

I recently saw two polar opposite recommendations: one from Jeff Atwood begging you not to write code, and one from Radovan Semančík suggesting that the only practical software to use is open source software that you can fix as needed.

Obviously Radovan’s approach is not a scalable one. While there are a lot of terrible software products out there, especially in the enterprise space, there are also a lot of good ones that just work. Limiting yourself to coding solutions is a waste of time that most companies won’t pay for. Also, Radovan’s approach limits you to open source solutions implemented in a language you are familiar with.

At the same time there are some problems that just need a coding solution, or are best solved that way.

For enterprise solutions I am going to thread the path between Jeff’s Scylla and Radovan’s Charybdis by posing these questions:

  • How much coding should be expected to implement an enterprise solution?
  • How can you find enterprise solutions that work well enough that you don’t need the source code or extensive customizations?

An enterprise solution that requires you to write code or scripts for basic functionality is not well designed, in my opinion. Coding or scripting should only be required when the functionality needed is unique to a specific deployment (or too uncommon to be a main feature of the product). This is a core philosophy at OptimalIdM as well. Although the VIS virtual directory does support .NET plug-ins, most of our customers never need one. When we have seen the need for plug-ins in the past, we looked for a common feature that could be added to the product instead.

So not having to write code is one measure of an enterprise solution’s quality. Here are some others:

Ease of install – they say you only get one chance to make a good first impression and install time is it for enterprise software. If your vendor is telling you that you need consulting hours just to install the software, it’s not going to get better from there.

Ease of use – requiring training to use enterprise software is a bad sign. Did you have to have training to use your browser or word processor? Enterprise software should be like that.

Stability – once installed and configured the software should just work. Baby-sitting should not be required. And if you really need two weeks of work or the source code to figure out why your solution stopped working, you made a poor vendor choice.

So go ahead and write code, but only when you have to.

Open Source IdM platforms

Radovan Semančík has put together an interesting list of open source IdM platforms. There are not as many as I would have expected.

It’s easier to push than be pulled

There is a lot of push vs. pull provisioning discussion going on recently. Both models have a place, but there is a hard and fast rule you should consider: if your solution requires your customers to stand up a web service (SOAP or REST), you are going to be running uphill against a head wind. Customers see public web services as support and security costs they just don’t want to pay.

For most customers it’s far easier to have a system running in a data center that makes web service requests to a provider than it is to stand up a service. Google, for one, recognizes this. The Google Apps Directory Sync is often incorrectly cited as a “Pull Provisioning” technology when it is exactly the opposite. The sync software runs on the customer side, reads from the local directory, and pushes to the Google Apps Provisioning Web Service.

A proper provisioning standard, regardless of whether it’s SOAP or REST, should support both push and pull (note that SPML supports both). But I really don’t see the pull model getting much traction for cloud-based provisioning. Perhaps it will for provisioning internal applications.

SPML and DSML search filters not so hard

One issue that has been raised regarding SPML is search filters. SPML allows searches that optionally specify a starting point (in terms of an SPML container), a subset of data to return, and a search filter. In the DSML Profile, the search filter is naturally a DSML filter.

DSML filters can be arbitrarily complex, just like the LDAP filters they model. For instance, a DSML filter could be something like “get everyone with the last name of Smith”, or in DSML:

<filter xmlns='urn:oasis:names:tc:DSML:2:0:core'>
  <substrings name='Name'><final>Smith</final></substrings>
</filter>

Or it could be “get everyone with the last name Smith who is not located in Orlando”:

<filter xmlns='urn:oasis:names:tc:DSML:2:0:core'>
  <and>
    <substrings name='Name'><final>Smith</final></substrings>
    <not>
      <substrings name='Office'><final>Orlando</final></substrings>
    </not>
  </and>
</filter>

Now if your back-end data is stored in LDAP, this is pretty easy to handle: just convert the DSML filter to an LDAP filter and do any attribute name mappings required. If your back-end data is in SQL, it is only slightly more difficult to translate the DSML filter into a SQL query clause.
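
As a sketch of the LDAP case, the translation is little more than a recursive walk of the DSML XML. This is a minimal illustration in C#, assuming System.Xml.Linq, handling only the filter constructs used in the examples above, and skipping RFC 4515 escaping of special characters:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class DsmlToLdap {
    // Translate a (subset of a) DSML filter element into an LDAP filter
    // string, mapping attribute names as needed.
    public static string Translate(XElement el, IDictionary<string, string> attrMap) {
        XNamespace ns = el.Name.Namespace;
        switch (el.Name.LocalName) {
            case "filter":
                return Translate(el.Elements().First(), attrMap);
            case "and":
                return "(&" + string.Concat(el.Elements().Select(c => Translate(c, attrMap))) + ")";
            case "or":
                return "(|" + string.Concat(el.Elements().Select(c => Translate(c, attrMap))) + ")";
            case "not":
                return "(!" + Translate(el.Elements().First(), attrMap) + ")";
            case "equalityMatch":
                return "(" + MapName(el, attrMap) + "=" + el.Element(ns + "value").Value + ")";
            case "substrings": {
                // Assemble initial*any*...*final; <final>Smith</final> becomes *Smith
                var parts = new List<string> { (string)el.Element(ns + "initial") ?? "" };
                parts.AddRange(el.Elements(ns + "any").Select(a => a.Value));
                parts.Add((string)el.Element(ns + "final") ?? "");
                return "(" + MapName(el, attrMap) + "=" + string.Join("*", parts) + ")";
            }
            default:
                throw new NotSupportedException(el.Name.LocalName);
        }
    }

    private static string MapName(XElement el, IDictionary<string, string> attrMap) {
        string name = el.Attribute("name").Value;
        return attrMap.TryGetValue(name, out string mapped) ? mapped : name;
    }
}

Run against the second example above, this produces (&(Name=*Smith)(!(Office=*Orlando))).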

But what if your back-end data store doesn’t support a query mechanism at all? What if the data is in a flat file, or a NoSQL DB? What if the data is only accessible through an API that doesn’t allow for filtering?

There are several ways to solve that problem, but the easiest is to recursively walk the DSML filter and create a decision tree where each node determines whether a given instance passes the part of the filter it knows. The code for this is pretty simple in .NET, and I posted an example here. Note that this example is just a partial implementation of the SPML search request for the purposes of demonstrating this concept; it is not a full-featured implementation of SPML.

The basic idea is that an abstract data provider would return a dictionary of the attribute values for each entry in the data. The interface could look like this (in C#):

public interface IUnfilteredDataProvider {
    List<DSMLData> DoUnfilteredSearch();
}

In this example the sample data provider reads entries from a flat file. On each search request the filter is recursively read and turned into nodes in a decision tree. Each data entry is then passed to the decision tree, and if it passes the filter it is appended to the returned results:

List<DSMLData> dataList = this._unfilteredProvider.DoUnfilteredSearch();
DataFilter df = GetDataFilter(searchRequest);
List<PSOType> returnPSOs = new List<PSOType>();

// return only those entries that pass the filter
foreach (DSMLData data in dataList) {
    if (df.Pass(data)) {
        returnPSOs.Add(data.GetPSO());
    }
}
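
As an aside, the IUnfilteredDataProvider behind this loop can be almost trivially simple. Here is a sketch of a flat-file version under assumed details: the actual sample’s file format and DSMLData constructor may differ, and here each line holds semicolon-separated attr=value pairs:

using System.Collections.Generic;
using System.IO;

public class FlatFileDataProvider : IUnfilteredDataProvider {
    private readonly string _path;

    public FlatFileDataProvider(string path) {
        _path = path;
    }

    // Read every entry in the file; filtering happens later in the
    // decision tree, so this provider stays completely dumb.
    public List<DSMLData> DoUnfilteredSearch() {
        var results = new List<DSMLData>();
        foreach (string line in File.ReadAllLines(_path)) {
            var attrs = new Dictionary<string, string>();
            foreach (string pair in line.Split(';')) {
                int eq = pair.IndexOf('=');
                if (eq > 0) {
                    attrs[pair.Substring(0, eq)] = pair.Substring(eq + 1);
                }
            }
            results.Add(new DSMLData(attrs)); // assumed constructor
        }
        return results;
    }
}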

The GetDataFilter method walks the DSML filter and constructs the decision tree, sketched below (feel free to download the sample and look at the code for more details). No special meaning is given to any of the attributes returned by the provider; they are all just treated as DSML attributes. Of course you will note a potential scalability issue with large data sets, but there are several tricks that can be used to minimize that (thoughts for a later post).
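
To make the decision tree concrete, here is roughly what the filter nodes and the recursive construction might look like. The class and method names mirror the ones used above, but the bodies are illustrative rather than the actual sample code, and it assumes DSMLData exposes a GetValue(name) accessor:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Each node answers one question: does this entry pass the piece of
// the filter that this node knows about?
public abstract class DataFilter {
    public abstract bool Pass(DSMLData entry);
}

public class AndFilter : DataFilter {
    private readonly List<DataFilter> _children;
    public AndFilter(List<DataFilter> children) { _children = children; }
    public override bool Pass(DSMLData entry) => _children.All(c => c.Pass(entry));
}

public class NotFilter : DataFilter {
    private readonly DataFilter _child;
    public NotFilter(DataFilter child) { _child = child; }
    public override bool Pass(DSMLData entry) => !_child.Pass(entry);
}

// Handles the <substrings><final> case from the examples above.
// LDAP-style matching is typically case-insensitive, so this is too.
public class FinalSubstringFilter : DataFilter {
    private readonly string _attr, _suffix;
    public FinalSubstringFilter(string attr, string suffix) { _attr = attr; _suffix = suffix; }
    public override bool Pass(DSMLData entry) =>
        entry.GetValue(_attr)?.EndsWith(_suffix, StringComparison.OrdinalIgnoreCase) == true; // GetValue is assumed
}

public static class FilterBuilder {
    // Recursively walk the DSML filter XML, creating the matching node
    // for each element encountered.
    public static DataFilter GetDataFilter(XElement el) {
        XNamespace ns = el.Name.Namespace;
        switch (el.Name.LocalName) {
            case "filter":
                return GetDataFilter(el.Elements().First());
            case "and":
                return new AndFilter(el.Elements().Select(GetDataFilter).ToList());
            case "not":
                return new NotFilter(GetDataFilter(el.Elements().First()));
            case "substrings":
                return new FinalSubstringFilter(
                    el.Attribute("name").Value,
                    el.Element(ns + "final").Value);
            default:
                throw new NotSupportedException(el.Name.LocalName);
        }
    }
}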

Oh, and this approach works great for creating a DSML service as well, and the general concept would be just as easy to implement in Java.

So what does all this mean? Supporting filtered searches in an SPML or DSML service is really not that hard, even if your data is stored in a data store that does not support filtering.

Federated Provisioning

Nishant Kaushik has a great (and funny) slide deck on federated provisioning on his blog. He discusses the distinctions between two flavors of federated provisioning: just-in-time (JIT) provisioning and what he terms advanced provisioning (often referred to as bulk provisioning).

I would like to clarify a couple of points in his presentation, however. He talks about a possible SAML profile of SPML for JIT provisioning. There was already an effort (which I led) to define a SAML profile of SPML in Project Liberty (most of the work has already been done if anyone wants to revive it). But this was not for JIT provisioning, as there is really no need for SPML when doing JIT provisioning; JIT provisioning can be done by SAML alone (or OpenID plus other stuff). Rather, the SAML profile of SPML was intended for advanced (bulk) provisioning. While the DSML profile could be used for advanced provisioning, the Liberty TEG felt that using SAML attribute assertions as the provisioning data structure was a better fit for advance-provisioning accounts that would later be used in a SAML sign-on.

Me, I see it as six of one, half a dozen of the other.

Another point Nishant brought up is the need for the equivalent of the SAML attribute query in SPML. That already exists in the form of the SPML search query, which supports getting a single user record and asking for a subset of attributes.

When discussing whether JIT or advanced provisioning is appropriate, the points that are usually brought up are auditing, de-provisioning, and billing. But Nishant overlooks the most important point:

Do the identities have any meaning outside of authentication?

If the answer for your service is yes, then JIT provisioning is likely not an option. This is not a case of “tightly coupled” versus “loosely coupled”. Rather, it is a matter of business requirements.

Take my current employer CareMedic (now part of Ingenix). We have a suite of medical revenue cycle management web apps that we host as SaaS. We need to know the identities of our customers’ users for the purposes of workflow before a user actually logs into the system.

Of course there are plenty of apps where the business requirements make JIT provisioning ideal. But it still comes down to the business requirements, not the technical architecture or standards.

SPML SIG at Catalyst

Bob Blakley announces here that the Burton Group identity blog has transitioned to individual Gartner blogs. He also announces an SPML SIG at the next two Catalysts.

It’s good to see attention being given to SPML again. But will this translate into real movement to adopt SPML (either 2.0 or a to-be-developed 3.0)? Perhaps, but we may have to take a step backwards in order to move forwards. If work starts on SPML 3.0, that will effectively kill adoption of 2.0. But if 2.0 isn’t being adopted anyway, why not go ahead and do a 3.0?

Interesting times.