
Did you get DC source code for Christmas?

Just in time for Christmas, Samba 4.0 was released. The big news here is that Samba 4.0 adds Active Directory Domain Controller emulation, including Kerberos, LDAP, DNS, and a bunch of other services.

While this is an impressive technical achievement, I don’t really see many enterprises adopting it. Samba 4 is fighting against one of the biggest IT pressures: headcount reduction. Most enterprises are now willing to pay more in software license costs if it saves them administrative man-hours.

So unless Samba 4 is going to be easier to install and maintain than Windows servers, it’s not really going to have an impact. Who knows, maybe it will be that easy. If you have Samba 4 in production, drop me a comment and let me know what you think.

Meanwhile, Jackson Shaw is … unimpressed.


Polar opposites

I recently saw two polar opposite recommendations: one from Jeff Atwood begging you not to write code, and one from Radovan Semančík suggesting that the only practical software to use is open source software that you can fix as needed.

Obviously Radovan’s approach is not a scalable one. While there are a lot of terrible software products out there, especially in the enterprise space, there are also a lot of good ones that just work. Limiting yourself to solutions you code yourself is a waste of time that most companies won’t pay for. Radovan’s approach also limits you to open source solutions implemented in a language you are familiar with.

At the same time there are some problems that just need a coding solution, or are best solved that way.

For enterprise solutions I am going to thread a path between Jeff’s Scylla and Radovan’s Charybdis by posing these questions:

  • How much coding should be expected to implement an enterprise solution?
  • How can you find enterprise solutions that work well enough that you don’t need the source code or extensive customizations?

An enterprise solution that requires you to write code or scripts for basic functionality is not well designed, in my opinion. Coding or scripting should only be required when the functionality needed is unique to a specific deployment (or too uncommon to be a main feature of the product). This is a core philosophy at OptimalIdM as well. Although the VIS virtual directory does support .NET plug-ins, most of our customers never need one. When we have seen the need for plug-ins in the past, we looked for a common feature that could be added to the product.

So not having to write code is one measure of an enterprise solution’s quality. Here are some others:

Ease of install – they say you only get one chance to make a good first impression, and for enterprise software the install is it. If your vendor is telling you that you need consulting hours just to install the software, it’s not going to get better from there.

Ease of use – requiring training to use enterprise software is a bad sign. Did you need training to use your browser or word processor? Enterprise software should be like that.

Stability – once installed and configured, the software should just work. Baby-sitting should not be required. And if you really need two weeks of work or the source code to figure out why your solution stopped working, you made a poor vendor choice.

So go ahead and write code, but only when you have to.

Specs, Patterns, and Provisioning

One of the most puzzling complaints I have heard about SPML is the search filter. The complaint is that it requires the service to support search filters of arbitrary complexity. I have never considered it that hard and have posted sample code to demonstrate it.

Still, perception has a reality of its own and search filters are often given as a reason not to support SPML.

So now that SCIM has finalized the 1.0 version, the filter-phobes can breathe easy, right? Not so much, it seems. Like SPML, SCIM has a search filter mechanism that supports filters of arbitrary complexity. Which is a good thing, for without that capability a provisioning service would be severely limited.

But really, this should not be a reason to avoid either SPML or SCIM. This class of problem comes up regularly, and provisioning service developers should learn how to handle it (if they don’t already). One could argue that it should even be considered a pattern.

Actually it is: the Specification Pattern.
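
To make this concrete, here is a minimal sketch of the Specification Pattern applied to a provisioning-style search filter. This is illustrative Java, not code from any SPML or SCIM toolkit; the Specification interface and the attribute names are invented for the example. The point is that each node of a parsed filter becomes one specification object, so filters of arbitrary complexity fall out of simple composition:

```java
import java.util.Map;

// Minimal Specification Pattern sketch (illustrative, not from any SPML/SCIM toolkit).
// A leaf specification tests one attribute; and/or/not composites handle arbitrary nesting.
interface Specification<T> {
    boolean isSatisfiedBy(T candidate);

    default Specification<T> and(Specification<T> other) {
        return c -> this.isSatisfiedBy(c) && other.isSatisfiedBy(c);
    }

    default Specification<T> or(Specification<T> other) {
        return c -> this.isSatisfiedBy(c) || other.isSatisfiedBy(c);
    }

    default Specification<T> not() {
        return c -> !this.isSatisfiedBy(c);
    }
}

public class FilterDemo {
    // Leaf: does the named attribute equal the given value?
    static Specification<Map<String, String>> attrEquals(String name, String value) {
        return entry -> value.equals(entry.get(name));
    }

    public static void main(String[] args) {
        // Equivalent of a nested filter: (givenName = "Jane" AND dept = "IT") OR title = "CTO"
        Specification<Map<String, String>> filter =
                attrEquals("givenName", "Jane").and(attrEquals("dept", "IT"))
                        .or(attrEquals("title", "CTO"));

        Map<String, String> user = Map.of("givenName", "Jane", "dept", "IT");
        System.out.println(filter.isSatisfiedBy(user)); // prints: true
    }
}
```

Evaluating a request is then just a matter of walking the parsed filter tree and composing leaf specifications as you go, which is work a search endpoint has to do anyway.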

Stealing the keys to the kingdom

There are some interesting tidbits coming out about the Chinese hack of Google. Apparently the source code to Google’s SSO technology was a target (although this is misstated in the headline as a “password system”). It’s unknown at this point what source code (if any) was taken, but this highlights the nightmare scenario of the SSO world.

If a vulnerability is found in your token generation code such that someone can spoof a token, then your SSO system and every system connected to it is compromised.

Of course, just having the source code is not in itself a problem. Typically there is a private key that is used to encrypt or sign the token. But protecting that private key is the issue, and that is where the source code is key. If you think your key has been compromised, you can replace it. But the code that authenticates the user and generates the token needs access to the private key to do the encryption (or signing, or both). If the secret mechanism for accessing that key is compromised, the attacker can attempt to penetrate the system where that key lives and get the key. With the key and the token-generating code in hand, the attacker can then access any SSO-protected system.

And here is an ugly secret. If the SSO technology is based on public key encryption, the key only needs to exist where the token is initially generated. If it’s based on symmetric key encryption, then the key has to exist on every server in the SSO environment.

So just use public key encryption and the problem is solved, right? Not so fast. One critical aspect of SSO is inactive session timeout. That requires the token to be “refreshed” when used, so that it expires based on inactivity. Refreshing the token at every server in the SSO system (every PEP, if you will) requires either that each server have the key, or that it make a remote call to a common authentication service to refresh the token.
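
To make the trade-off concrete, here is a minimal sketch of the symmetric-key approach: an HMAC-signed token with a sliding inactivity window. The token format and class names are invented for illustration, not any vendor’s SSO implementation. Notice that refresh() has to call sign(), which is exactly why every server that refreshes tokens locally must hold the key:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical SSO token: "user|expiryMillis|signature", HMAC-signed.
public class TokenRefresher {
    private static final long IDLE_TIMEOUT_MS = 15 * 60 * 1000; // 15-minute inactivity window

    private final SecretKeySpec key;

    TokenRefresher(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    private String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    // Issue a fresh token that expires IDLE_TIMEOUT_MS from now.
    public String issue(String user) throws Exception {
        String payload = user + "|" + (System.currentTimeMillis() + IDLE_TIMEOUT_MS);
        return payload + "|" + sign(payload);
    }

    // Validate the token and, if still live, re-issue it with a new expiry.
    // Returns null when the signature is bad or the idle window has lapsed.
    public String refresh(String token) throws Exception {
        String[] parts = token.split("\\|");
        if (parts.length != 3) return null;
        String payload = parts[0] + "|" + parts[1];
        if (!sign(payload).equals(parts[2])) return null;                       // spoofed or tampered
        if (Long.parseLong(parts[1]) < System.currentTimeMillis()) return null; // idle too long
        return issue(parts[0]); // slide the inactivity window forward
    }
}
```

A public-key variant could verify signatures everywhere but could only re-sign, and therefore only refresh, where the private key lives, which is what forces the remote call to a common authentication service.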

There are pluses and minuses to both approaches. One puts the keys to the kingdom in more locations; the other adds overhead to every token refresh. When security and performance collide, who do you think usually wins?

These kinds of trade-offs are what make SSO so interesting to me.

Note that I am not talking about federated SSO (SAML or OpenID) or intranet SSO (Kerberos), as they present a different set of challenges.

CareMedic acquired by Ingenix

My current employer, CareMedic, has been acquired by Ingenix. The announcement is here. I am cautiously optimistic that this will be a good deal for both parties.

Good summary of Sun’s open IdM projects

Luca Mayer has this summary of Sun’s open source IdM projects. I have some experience with OpenSPML (obviously), and I have fiddled with OpenDS. There is some great stuff there.

I hope this all survives the acquisition.

Cool stuff, in twenty years

Felix Gaehtgens calls Microsoft on the carpet about what it is ever going to do with U-Prove. Kim Cameron responds here with a call for patience. Both make good points, but I fear that as interesting as U-Prove is, it is way too far ahead of the market.

There are two reasons for this. First, it is patent-encumbered technology. Patent-encumbered technologies fare very poorly in today’s market. After a few high-profile patent fights, any technology that is patent-encumbered is treated like nuclear waste by most vendors. Even if Microsoft adopts fair licensing terms, it becomes a “get a lawyer first” barrier to adoption. In twenty years this won’t be a problem (so long as Microsoft doesn’t file for any more patents on related aspects).

Second, it solves a problem that the market doesn’t really care about today (although it should). This is the same problem the notion of an Identity Oracle has. You haven’t heard much about that idea recently, and for good reason. There is just no money to be made with it (yet). The use cases usually trotted out for both of these are typically edge conditions, my favorite being the RU/18 one. It’s like the Hello World of Identity.

The only people who REALLY care if you are over 18 when you buy something are your parents and the government.

In today’s world there are two privacy problems: under-sharing and over-sharing. Under-sharing is when you have to fill out the same stupid questionnaire at every new doctor’s office you visit. Now that is an issue that people care about. I know they care about it because non-computer people complain to me about it often.

Over-sharing is when you have to put in your home address to register for something even though shipping isn’t required. I almost never hear anyone complain about that, and those who do just put in bogus addresses anyway. Maybe in twenty years the average person will care enough about privacy to worry about over-sharing. But not today.

So U-Prove will be cool stuff in twenty years. Maybe.