Tag Archives: Software

Stealing the keys to the kingdom

There are some interesting tidbits coming out about the Chinese hack of Google. Apparently the source code to Google’s SSO technology was a target (although this is misstated in the headline as a “password system”). It’s unknown at this point what source code (if any) was taken, but this highlights the nightmare scenario of the SSO world.

If a vulnerability is found in your token generation code such that someone can spoof a token, then your SSO system and every system connected to it is compromised.

Of course, just having the source code is not in itself a problem. Typically there is a private key that is used to encrypt or sign the token, and protecting that private key is the real issue; that is where the source code matters. If you think your key has been compromised you can replace it. But the code that authenticates the user and generates the token needs access to the private key to do the encryption, the signing, or both. If the mechanism that code uses to retrieve the key is exposed, the attacker can attempt to penetrate the system where the key lives and steal it. With the key and the token-generating code in hand, the attacker can access any SSO-protected system.
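To make that concrete, here is a minimal sketch (Python, with invented names; not Google’s actual code) of what a token-issuing routine looks like in principle. Everything hinges on the signing key: anyone holding both this code and the key can mint a token for any user.

```python
import base64
import hashlib
import hmac
import json
import time

# Illustrative only: a bare-bones SSO-style token is a payload plus a
# signature computed with a secret key. Whoever holds BOTH this code and
# the key can mint a valid token for any user they like.

SECRET_KEY = b"placeholder -- in practice loaded from a protected key store"

def issue_token(user_id: str, ttl_seconds: int = 900) -> str:
    payload = json.dumps({"sub": user_id, "exp": int(time.time()) + ttl_seconds})
    body = base64.urlsafe_b64encode(payload.encode())
    sig = hmac.new(SECRET_KEY, body, hashlib.sha256).digest()
    return (body + b"." + base64.urlsafe_b64encode(sig)).decode()

# A forged-but-validly-signed token is indistinguishable from a real one:
# issue_token("admin") looks exactly like a token issued after a real login.
```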

And here is an ugly secret. If the SSO technology is based on public key encryption, the key only needs to exist where the token is initially generated. If it’s based on symmetric key encryption, the key has to exist on every server in the SSO environment.
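The difference is easiest to see at the verification step. A rough sketch, assuming Python and the third-party cryptography package for the asymmetric case; the function names and token body are made up for illustration:

```python
import hashlib
import hmac

from cryptography.hazmat.primitives.asymmetric import ed25519

# Symmetric case: every PEP that validates (or refreshes) a token needs
# the very secret the issuer signs with -- the key lives everywhere.
def verify_symmetric(secret: bytes, body: bytes, sig: bytes) -> bool:
    expected = hmac.new(secret, body, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

# Asymmetric case: only the issuer holds the private key; verifiers need
# just the public key, which does an attacker no good.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

token_body = b"sub=alice&exp=1700000000"
signature = private_key.sign(token_body)   # happens once, where the token is minted
public_key.verify(signature, token_body)   # raises InvalidSignature if tampered with
```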

So just use public key encryption and that solves the problem, right? Not so fast. One critical aspect of SSO is inactive session timeout. That requires the token to be “refreshed” every time it is used, so that it expires based on inactivity. Refreshing the token at every server in the SSO system (every PEP, if you will) requires either that the server have the key, or that it make a remote call to a common authentication service to refresh the token. Both approaches are sketched below.
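Here is a hedged sketch of those two refresh strategies, continuing the illustrative token format from the earlier example; the endpoint name and key handling are invented, not any product’s real API:

```python
import base64
import hashlib
import hmac
import json
import time
import urllib.request

def refresh_locally(token: str, signing_key: bytes, ttl: int = 900) -> str:
    """Re-sign with a local copy of the key: no network hop, but the
    signing key now has to live on every PEP in the environment."""
    body_b64, _old_sig = token.split(".")
    payload = json.loads(base64.urlsafe_b64decode(body_b64))
    payload["exp"] = int(time.time()) + ttl            # slide the inactivity window
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(signing_key, body, hashlib.sha256).digest()
    return (body + b"." + base64.urlsafe_b64encode(sig)).decode()

def refresh_remotely(token: str) -> str:
    """Ask a central authentication service to re-sign: the key stays in
    one place, but every refresh pays for a network round trip."""
    req = urllib.request.Request(
        "https://auth.example.internal/refresh",       # hypothetical endpoint
        data=token.encode(),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```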

There are pluses and minuses to both approaches. One puts the keys to the kingdom in more locations but the other adds overhead to the token refresh. When security and performance collide, who do you think usually wins?

These kinds of trade offs are what make SSO so interesting to me.

Note that I am not talking about federated SSO (SAML or OpenID) or intranet SSO (Kerberos), as they present a different set of challenges.


Who is we? You is.

Pogo said it best: “We have met the enemy and they is us”. Bob Blakely asks us, the vendors, four basic questions on security:

Are we willing to give anything up?

Are we willing to do anything different?

Are we willing to take any blame?

Are we willing to give any guarantees?

The answer to all of these questions really depends on who we is. I am not, for instance, willing to give up earning a paycheck in the software industry. Which means the things I am willing to give up or do different are constrained by the other we’s that determine the success or failure of my employer.

Let me give you an example: Vista UAC. It is a classic trade-off between security and convenience, and the users hate it. Worse for Microsoft, its presence hasn’t helped sell Vista, even for business use.

So what we are willing to give up and do different is constrained by a market that wants security without any additional cost to, or effort from, the end user.

As for blame, who should get it? Yes, vendors often do stupid things for which they should get blamed. But what about situations where there are different levels of security available and the end user chooses less than the most secure? Tried running your browser of choice without JavaScript enabled lately? Who gets the blame for that? You for enabling JavaScript, the browser vendor for providing the capability in the first place, or all the web site designers who force you to enable JavaScript to view their site?

How about open source? Who gets the blame for those vulnerabilities?

As for guarantees, they are a good idea, but there has to be a limit. No software company can take on the liability for the end user’s losses in a security breach. The reason is simple: the liability is open-ended, but the cost of the software is not.

While Bob’s questions are interesting, they are not the important ones. The important questions are:

Are you as the consumer willing to factor security into your buying choices?

Are you willing to pay more for higher security?

Are you willing to have fewer features if it means a more secure system?

Are you willing to take responsibility for your own actions?

The answer to these questions today is no.

Risk, liability, and clouds

I had discussed the issue of software vendor liability here and made the point that no software vendor (now or in the near future) is going to assume the liability for the cost to your business if there are defects in the software. Recently Ed Cone listened to some cloud vendors talk about security and had this to say about it:

The security model is so immature right now that it is clear that most of the assurances cloud vendors offer are around infrastructure and covering their own respective risks. Most cloud vendors will tell you outright that it is up to the customers to individually secure their own applications and data in the cloud, for example, by controlling which ports are open and closed into the customer’s virtualized instance within the cloud.

As Maiwald puts it, enterprises need to be aware of this distinction. Security in the cloud means different things to those offering cloud services and those using cloud services. Even if you’re working with the most open and forthright vendors who are willing to show you every facet of their SAS 70 audit paperwork and provide some level of recompense for security glitches on their end, they’re most certainly not assuming your risks. For example, if Amazon Web Services screws up and your applications are down for half a day, it’ll credit you for 110 percent of the fees charged for that amount of time but you’re still soaked for any of the associated losses and costs that come as a result of the downtime.

As organizations weigh the risks against the financial benefits of cloud computing, Maiwald believes they must keep in mind that, “There is risk that is not being transferred with that (cloud services) contract.”

There are several important points here. First, outsourcing a service doesn’t mean outsourcing the risk; likewise, purchasing software isn’t the same as buying insurance. Customers of both cloud services and on-premise software need to understand this.

Second, when evaluating the risk of moving to a cloud-based service you have to compare it against the risk of NOT moving to a cloud-based service. There is the risk that your service provider could be compromised, but that has to be weighed against the risk that your own IT systems will be compromised. Likewise, the risk of a service provider outage must be weighed against the risk of an internal system outage. Both will impact your business.

Third, you should also factor in opportunity risk. If you choose not to do something that reduces cost, you risk losing an opportunity that would have been available by dedicating those resources elsewhere.

Don’t poke the bees

Developers are like bees. If you nurture them and care for them they will work incredibly hard for you. If you poke the hive or get them riled up you are in for a world of pain. At the end of the day if you blow a little smoke at them you can steal all the honey (kidding, kidding).

Apple hasn’t learned the bee nurturing lesson yet. They are busy poking the hives of the legions of would-be iPhone developers. It’s gotten so bad that they are now telling independent developers that even their rejection letters are covered by NDA (from Ars Technica):

And the situation only seems to be getting worse. Although the details aren’t clear, it appears Apple is now telling developers that the information included in their app rejection letters is covered under NDA. When Apple’s own words became controversial, instead of clearing the air it chose to try and force developers to keep quiet.