Developers are like bees. If you nurture them and care for them they will work incredibly hard for you. If you poke the hive or get them riled up you are in for a world of pain. At the end of the day if you blow a little smoke at them you can steal all the honey (kidding, kidding).
Apple hasn’t learned the bee-nurturing lesson yet. They are busy poking the hives of legions of would-be iPhone developers. It’s gotten so bad that they are now telling independent developers that even their rejection letters are covered by NDA (from Ars Technica):
And the situation only seems to be getting worse. Although the details aren’t clear, it appears Apple is now telling developers that the information included in their app rejection letters is covered under NDA. When Apple’s own words became controversial, instead of clearing the air it chose to try and force developers to keep quiet.
Here comes the bus… and Google’s under it.
Several big ISPs threw Google under the bus at a recent Senate hearing about targeted ads (from Ars Technica):
At today’s Senate hearing, AT&T, Verizon, and Time Warner all showed an interest in developing an industry consensus on behavioral targeting that includes affirmative (opt-in) consent from consumers. They would like to see this adopted by all behavioral targeters, though, including ad agencies and search engines, so that ISPs aren’t at a disadvantage in the market. Good luck with that.
There are some very big differences between what Google, Yahoo, Cuil, and other search engines do and what ISPs want to do with the abhorrent NebuAd and Phorm products.
Using a search engine is an opt-in choice by its very nature. If I want a search engine that I know serves up targeted ads and tracks my search choices, I can use Google. If I am bothered by the behavioral tracking, I can use Cuil. Users have far less choice in their ISP; there may be some options, but usually only a small number.
The search engines are very open about the fact that they serve up ads; the ads are right on the page. What the ISPs want to do is very sneaky: they want to sell your preferences to others, who will serve the ads. Most users won’t even know it was the ISP spying on their traffic.
The telcos have been granted valuable monopolies in exchange for government oversight of their practices. It’s time for the government to stand up and tell them no on behavioral tracking.
I don’t blog about politics, although sometimes I blog about things that are intertwined with politics. The Palin email hack is one of those things, fascinating on both technical and social levels. Socially, as a libertarian with no party affiliation, I find it interesting to watch the outrage of the normally surveillance-happy right wing paired with the indifference of the normally privacy-fanatical left.
Technically, a lot of good summaries have been written about how the hack shows the weakness of knowledge-based authentication. Mark Diodotti of Burton has a particularly well-written piece about it here. But there are several aspects of this that haven’t, so far as I am aware, been brought up.
First, this is usually described as a hack into Palin’s email account. That is true, but it understates the depth of the problem. What was actually hacked was Palin’s Yahoo account, which grants access to a number of Yahoo services including email. Another of those services is OpenID. The hacker would have obtained access not only to Palin’s email, but also to every OpenID-enabled account for which Palin had used Yahoo as the identity provider. In fairness this is no different than if an IdP password were compromised for SAML or InfoCard (except self-issued cards), but it does point out the downside of federation.
Second, the vulnerability was not in the primary means of authentication (the password), but in the secondary means (forgot password). The lesson here is that security is a chain that is only as strong as its weakest link. If the secondary means of authentication were made stronger, you would still need to worry about the tertiary means, which in many systems involves calling a support number and convincing someone you are the right person. In many cases that’s not a terribly difficult process if you have enough personal information about the target.
Third, security has to match expected use. That is really the story here. I have a Yahoo email account, but there is no reason to expect anyone to attempt to compromise it using the same methods, because there is no value in it. Security not through obscurity but through lack of motivation. Palin elevated the value of hacking Yahoo by using it for official business (or at least appearing to). That’s not to say she wouldn’t have been a target, as many celebrities are, even with an obviously personal email address, but she unwisely made herself a very inviting target.
So what are the lessons here?
In federation, the relying parties are only as secure as the least secure alternate means of authentication at the identity provider.
As consumers, we must be cautious about elevating the value of an identity provider beyond what it was designed for. This can happen because of social factors (as in Palin’s case) or by using it as a federated identity provider for a higher-value relying party.
Posted in Authentication, Cardspace, Identity, Information Card, OpenID, Privacy, SAML, Security
Tagged Hacking, Identity, OpenID, Palin, Privacy, SAML, Security
Ping Identity’s Andre Durand makes some interesting observations about internal IdM vs Federation projects in the enterprise:
What’s interesting about our conversations is that invariably, they talk about one or more of their internal provisioning, IdM or WAM projects that is basically not meeting their expectations. What I find interesting about this is that federation deployments, by their very distributed nature, are taking an entirely different approach. Most if not all centralization projects are large, costly, complex and long. This makes them inherently more risky, and introduces higher and higher probabilities of failure at one or more levels.
He is absolutely right that centralization projects are typically larger, more costly, and just plain harder than federation projects. Centralization is harder because it creates tight coupling between systems, whereas federation creates loose coupling. And tightly coupling systems made by different vendors (and often even the same vendor) can be very hard and risky.
So Federation (loose coupling) is always the best way to go, right?
No. It’s often the best way to go, but there are many times when tight coupling simply must be done. Often that means using a provisioning system and a means of synchronizing accounts (IBM TIM, Sun SIM, MS ILM, etc.). Sometimes it means configuring your systems to centralize the identity (Quest Vintela, Centrify, etc.).
And here is where I will let you in on the dirty little secret of provisioning: it’s really all about deprovisioning. Typically enterprises don’t care if it takes weeks for you to get access to all of the resources you need to do your job. They care in the abstract (usually), but not enough to actually do anything about it. But the minute your employment ends, your access to all your enterprise resources needs to be turned off.
And for that you need centralization of some sort.
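To make the point concrete, here is a minimal sketch of the deprovisioning fan-out that drives centralization: one termination event must disable the account in every connected system, and any system that can’t be reached must be flagged for manual follow-up. The connector names and interface here are hypothetical, not any particular vendor’s API.

```python
class LdapConnector:
    """Toy stand-in for a directory connector."""
    name = "corporate-ldap"

    def __init__(self):
        self.disabled = set()

    def disable_account(self, user_id):
        self.disabled.add(user_id)


class FlakyAppConnector:
    """Toy stand-in for an application that is unreachable at termination time."""
    name = "legacy-crm"

    def disable_account(self, user_id):
        raise ConnectionError("system unreachable")


def deprovision(user_id, connectors):
    """Fan the termination event out to every system; return the ones that failed."""
    failures = []
    for connector in connectors:
        try:
            connector.disable_account(user_id)
        except Exception:
            failures.append(connector.name)  # flag for manual cleanup
    return failures


ldap = LdapConnector()
failed = deprovision("jdoe", [ldap, FlakyAppConnector()])
print(failed)  # ['legacy-crm']
```

A federation-only architecture has no place to hang this loop, which is why even loosely coupled enterprises end up with some central point of control for terminations.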
Of course, I have seen plenty of cases where systems are tightly coupled from an identity standpoint but loosely coupled from an authentication standpoint. This is where two systems share synchronized or centralized accounts but are SSO-enabled via federation. One common use case is using federation to bridge different vendors’ WAM solutions.
Posted in Authentication, Identity, Identity Management, Password Management, Provisioning, Security
Tagged Enterprise Architecture, Federation, Identity, Identity Management, Provisioning, Security
William Vambenepe has an interesting experience with his bank and would, it seems, prefer indifference to incompetence:
Here we go again. Yet another institution who “takes the protection of [my] personal information very seriously” wrote to me to let me know that they lost some unencrypted backup tapes with my SSN and everything. In an way I’d prefer if they said that they don’t take the protection of my personal information seriously. Because now I have to assume that they are incompetent even at the tasks they take seriously, which presumably also includes performing financial transactions (it’s a bank). That they plead dumbness rather than carelessness kind of scares me.
Well, not really. This letter is just damage control of course and whatever reassuring verbiage they put doesn’t mean anything. Everyone is just playing pretend, which is how this whole “identify theft” problem started (“we’ll pretend that the SSN is confidential information and that we can use it to authenticate people”).
The logical fallacy here is that the two are not mutually exclusive. Your bank can be both indifferent to your privacy and incompetent at protecting it.
Once again I will make the futile case that all SSNs should be made public to end this nonsense. That’s not to say it won’t be replaced by other nonsense. But at least it would be new and entertaining nonsense.
I find it fascinating how quickly the industry is moving from deriding the IE8 Porn Mode to emulating it. Chrome and Safari apparently already have it, and now it’s coming to the Firefox world (via TechCrunch):
At one time, it was tough to cover your tracks after visiting some, uh, questionable sites, but now it’s getting easier than ever. According to Mozilla, Firefox is joining Safari, Internet Explorer 8, and Google Chrome in providing its users with a “private mode” that will not collect any of your browsing history or cookies in the upcoming release of Firefox 3.1.
Much like Chrome, users will be able to open a separate window in Firefox 3.1 that will let them browse the Web in any way they see fit without worrying about the wife or kids entering the History menu and seeing why they spent the last hour in the office with the door locked.
Despite the obvious porn driver, I am surprised that privacy advocates aren’t praising this feature. When using this mode, one of the most prevalent means of user tracking (persistent cookies) is done away with.
Google has apparently had a serious flaw in its implementation of “SAML” (quotes intentional) that has just been closed. Kim Cameron takes Google to the woodshed:
This is all pretty depressing but it gets worse. At some point, Google decided to offer SSO to third party sites. But according to the researchers, at this point, the scope still was not being verified. Of course the conclusion is that any service provider who subscribed to this SSO service – and any wayward employee who could get at the tokens – could impersonate any user of the third party service and access their accounts anywhere within the Google ecosystem.
My friends at Google aren’t going to want to be lectured about security and privacy issues – especially by someone from Microsoft. And I want to help, not hinder.
But let’s face it. As an industry we shouldn’t be making the kinds of mistakes we made 15 or 20 years ago. There must be better processes in place. I hope we’ll get to the point where we are all using vetted software frameworks so this kind of do-it-yourself brain surgery doesn’t happen.
Let’s work together to build our systems to protect them from inside jobs. If we all had this as a primary goal, the Google SSO fiasco could not have happened. And as I’ll make clear in an upcoming post, I’m feeling very strongly about this particular issue.
Jackson Shaw poses this interesting question:
Could Google’s predilection for those who have just emerged from the fountain of youth have contributed to this SSO “disaster”? Obviously, I don’t know if it is a contributing factor or not but I do wonder.
I wonder too. The irony is that there are any number of SAML experts that Google could have hired to help them and perhaps have avoided this embarrassment.
However, I would like to take exception to Connor Cahill’s suggestion that all would be well if Google had just used pseudonymous IDs (such as SAML 2.0 persistent IDs):
While I agree totally that the intended recipient should have been identified within an <AudienceRestriction> in the SAML assertion (how SAML shows the intended scope of the assertion) the problem would have been moot if Google used good pseudonymous identifiers for its users.
This is a very dangerous suggestion, as it implies that SAML is not secure enough without pseudonymous identifiers, the use of which makes SAML deployments a lot more complicated. Pseudonymous IDs are for privacy, not security. If your system requires them to be secure, you have done something wrong. Period.
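The check that was actually missing is simple enough to sketch. Below is a minimal illustration of a relying party verifying that an assertion’s <AudienceRestriction> names it; the entity IDs are made up, and a real deployment must also validate the XML signature, timestamps, and the rest of the SAML processing rules via a vetted library rather than hand-rolling any of this.

```python
# Sketch of the scope check: does this assertion's <AudienceRestriction>
# actually name *this* service provider? (Signature and Conditions
# validation are omitted here but are mandatory in real deployments.)
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"


def audience_ok(assertion_xml: str, my_entity_id: str) -> bool:
    root = ET.fromstring(assertion_xml)
    audiences = [
        el.text.strip()
        for el in root.iter(f"{{{SAML_NS}}}Audience")
        if el.text
    ]
    # An assertion with no matching (or no) <Audience> must be rejected:
    # an unscoped assertion can be replayed against any relying party.
    return my_entity_id in audiences


assertion = f"""
<Assertion xmlns="{SAML_NS}">
  <Conditions>
    <AudienceRestriction>
      <Audience>https://sp.example.com/saml</Audience>
    </AudienceRestriction>
  </Conditions>
</Assertion>
"""

print(audience_ok(assertion, "https://sp.example.com/saml"))   # True
print(audience_ok(assertion, "https://evil.example.net/saml"))  # False
```

Note that this check is about scoping, not about how the user is named: it works identically whether the subject identifier is an email address or a pseudonymous persistent ID, which is exactly the point.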