Category Archives: SaaS

Federated Provisioning

Nishant Kaushik has a great (and funny) slide deck on federated provisioning on his blog. He discusses the distinctions between two flavors of federated provisioning: just-in-time (JIT) provisioning and what he terms advanced provisioning (often referred to as bulk provisioning).

I would like to clarify a couple of points in his presentation, however. He talks about a possible SAML profile of SPML for JIT provisioning. There was already an effort (which I led) to define a SAML profile of SPML in Project Liberty (most of the work has already been done if anyone wants to revive it). But this was not for JIT provisioning, as there is really no need for SPML when doing JIT provisioning; JIT provisioning can be done by SAML alone (or OpenID plus other pieces). Rather, the SAML profile of SPML was intended for advanced (bulk) provisioning. While the DSML profile could be used for advanced provisioning, the Liberty TEG felt that using SAML attribute assertions as the provisioning data structure was a better fit for advanced provisioning of accounts that would later be used in a SAML sign-on.

Me, I see it as six of one, half a dozen of the other.
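To make the bulk case concrete, here is a rough sketch of an SPML add request that carries SAML attribute values as its provisioning data rather than a DSML fragment. The namespace URIs are the real SPMLv2 and SAML 2.0 ones, but since the Liberty profile was never finalized, the way they are combined here is purely illustrative, not the profile's actual schema.

```python
import xml.etree.ElementTree as ET

# Real namespace URIs (SPMLv2 core, SAML 2.0 assertions); how they are
# combined below is an assumption -- the Liberty SAML profile of SPML
# was never finalized, so the container structure is illustrative only.
SPML = "urn:oasis:names:tc:SPML:2:0"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_add_request(user_id: str, attributes: dict) -> str:
    """Sketch an SPML addRequest whose provisioning data is a SAML
    AttributeStatement instead of a DSML fragment."""
    req = ET.Element(f"{{{SPML}}}addRequest")
    pso_id = ET.SubElement(req, f"{{{SPML}}}psoID")
    pso_id.set("ID", user_id)
    data = ET.SubElement(req, f"{{{SPML}}}data")
    stmt = ET.SubElement(data, f"{{{SAML}}}AttributeStatement")
    for name, value in attributes.items():
        attr = ET.SubElement(stmt, f"{{{SAML}}}Attribute", Name=name)
        val = ET.SubElement(attr, f"{{{SAML}}}AttributeValue")
        val.text = value
    return ET.tostring(req, encoding="unicode")

xml = build_add_request("jsmith", {"mail": "jsmith@example.com", "role": "sales"})
```

The point is only the shape of the payload: the same attribute names and values provisioned here would later appear in a SAML attribute assertion at sign-on time.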

Another point Nishant brought up is the need for an equivalent of the SAML attribute query in SPML. That already exists in the form of the SPML search query, which supports retrieving a single user record and asking for a subset of its attributes.
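To illustrate the overlap, here is a toy in-memory model of what such a search expresses: select one identity and return only the requested attributes, which is functionally what a SAML attribute query does. The record shape and function name are mine, not from either specification.

```python
# A toy model of the semantics an SPML searchRequest can express:
# select a single PSO by identifier and return only the requested
# attributes. (Names and data shapes here are illustrative.)

def spml_search(store: dict, pso_id: str, attribute_names: list) -> dict:
    """Return the requested subset of attributes for one identity."""
    record = store.get(pso_id, {})
    return {k: v for k, v in record.items() if k in attribute_names}

store = {
    "jsmith": {"mail": "jsmith@example.com", "role": "sales", "dept": "east"},
}
result = spml_search(store, "jsmith", ["mail", "role"])
# result -> {"mail": "jsmith@example.com", "role": "sales"}
```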

When discussing whether JIT or advanced provisioning is appropriate, the points usually brought up are auditing, de-provisioning, and billing. But Nishant overlooks the most important question:

Do the identities have any meaning outside of authentication?

If the answer for your service is yes, then JIT provisioning is likely not an option. This is not a case of “tightly coupled” versus “loosely coupled”. Rather, it is a matter of business requirements.

Take my current employer CareMedic (now part of Ingenix). We have a suite of medical revenue cycle management web apps that we host as SaaS. We need to know the identities of our customers' users for workflow purposes before a user ever logs into the system.

Of course there are plenty of apps where the business requirements make JIT provisioning ideal. But it still comes down to the business requirements, not the technical architecture or standards.

Kloudy Kindle Kerfuffle

There are several themes emerging in the wake of the Kindle Kerfuffle (see this and this). These can be summarized as:

  1. Kindle is really a cloud service
  2. The recent file deletion kerfuffle is not about DRM, it’s about copyright protection
  3. Kindle had to delete the Orwell titles because Amazon did not have the rights to sell them in the first place

The second and third arguments are the easiest to dispense with. Without getting into the legalities (I am not a lawyer), the simple fact that Amazon promises not to do it in the future means that it didn’t have to do it previously. And of course DRM is the main issue; without DRM we wouldn’t be talking about this at all.

Which leaves the question: is Kindle really a cloud service? The argument is that it is really a cloud bookshelf from which your books are cached to your local device. I don’t buy it. By that definition web pages are a cloud service because they are stored on a server and cached on your local device. The main thrust of Kindle is reading books on your Kindle reader. The fact that you can get books from the cloud is just a delivery mechanism, just like downloading software to run on your PC. If that makes it a cloud service, then please tell me what doesn’t qualify as a cloud service.

The Cyber Cynic raises this point:

But, it’s worse than that. Now, that we’ve discovered that Amazon can remotely and automatically delete your books without your knowledge or consent, what’s to stop Amazon, some other company, or the government from not merely deleting it, but replacing it with an edited version? Nothing.

It’s not that I’m not a trusting person… oh wait it is.

The cloud vs the castle

Here is an interesting story about one downside to SaaS:

Stealing the password for someone’s Gmail account, for example, not only gives the hacker access to that person’s personal e-mail, but also to any other Google applications they might use for work, like those used to create spreadsheets or presentations.

That’s apparently what happened to Twitter, which shares confidential data within the company through the Google Apps package that incorporates e-mail, word processing, spreadsheet, calendar and other Google services for $50 per user per year.

Co-founder Biz Stone wrote in a blog posting Wednesday that the personal e-mail of an unnamed Twitter administrative employee was hacked about a month ago, and through that the attacker got access to the employee’s Google Apps account.

Of course internal documents get compromised all the time in companies that use traditional methods of document sharing, but it’s still food for thought.

SaaS provisioning

Jackson Shaw makes the point that the last thing most enterprises need to take on is provisioning their SaaS identities while they are still struggling with their internal identities:

We have a standard called “Services Provisioning Markup Language” (SPML) which was specified to help provision identities via a web service. Does your SaaS vendor support that standard? I’ll bet they do not! What do you do then? I’ve met with hundreds of customers over the years and many are still struggling with provisioning inside the enterprise! Throw in SaaS provisioning – via some hairbrained interface because the vendor doesn’t support SPML – and it only adds to the organization’s identity management complexity.

Of course, having an SPML capability in a SaaS offering is not going to be much help if the enterprise doesn’t have a provisioning system in place with SPML support. SPML support is not widely available in provisioning systems (although a few do have it out of the box).

Ashraf Motiwala echoes the point and also notes that enterprises are going to want to leverage not only their internal provisioning systems but their workflow and role management systems as well:

Recreating a workflow engine, role management, delegation, etc. in the cloud seems to just create redundancy for these capabilities, especially for organizations that have already dropped a few dollars to deploy an IdM solution on premise. Why would I drop my existing investment here? (Perhaps there is a compelling case, but I just don’t see it.) I would much rather find a solution that proxies the SPML requests from my existing provisioning solution that handles all the complexities (or “hairbrained interfaces”) for the SaaS apps on the backend!
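A minimal sketch of that proxy idea: accept an SPML-style add operation from the internal provisioning system and translate it into whatever the vendor's proprietary interface expects. Everything vendor-specific here (the URL, the JSON shape, the function name) is a placeholder, not a real API.

```python
import json

# Sketch of a provisioning proxy: the on-premise system speaks SPML-style
# add operations; the proxy translates them into a hypothetical SaaS
# vendor REST call. The endpoint and payload shape are placeholders.

def spml_add_to_vendor_call(pso_id: str, data: dict) -> tuple:
    """Translate an SPML-style add into a (url, body) pair for a
    hypothetical vendor REST endpoint."""
    url = "https://saas.example.com/api/users"  # placeholder endpoint
    body = json.dumps({"username": pso_id, **data})
    return url, body

url, body = spml_add_to_vendor_call("jsmith", {"mail": "jsmith@example.com"})
```

In a real deployment this translation layer is exactly where the “hairbrained interfaces” get contained, so the provisioning system only ever sees SPML.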

The upshot is that SaaS vendors should be rolling out SPML interfaces to their services. But just as with traditional enterprise software vendors, they most likely won’t do it until customers demand it. Until it becomes a selection criterion, it probably won’t happen.

The new lock-in

The Economist is warning of a new form of vendor lock-in: cloud computing (hat tip to Suretec):

But now there is the danger of a new form of lock-in. “Cloud computing”—the delivery of computer services from vast warehouses of shared machines—enables companies and individuals to cut costs by handing over the running of their e-mail, customer databases or accounting software to someone else, and then accessing it over the internet. There are many advantages to this approach for both customers (lower cost, less complexity) and service providers (economies of scale). But customers risk losing control once again, in particular over their data, as they migrate into the cloud. Moving from one service provider to another could be even more difficult than switching between software packages in the old days. For a foretaste of this problem, try moving your MySpace profile to Facebook without manually retyping everything.

Good point.

Jurisdiction matters further

I wrote here about how legal jurisdiction affects privacy. Tim Cole writes here about how the EU Data Protection Directive affects cloud services:

Recently, at a press briefing by German IBM boss Stefan Jetter who waxed enthusiastic about Cloud Computing, an elderly journalist rose and asked him a show-stopper: “Where are my data when they’re out there in the Cloud?” Jetter did a double take, but my colleague pressed on: “I mean, physically, where are they?”

Of course, the answer is: On some nameless server somewhere, anywhere in a grid farm in Ohio or Dublin or… In fact, the usual answer is: Who cares?

Well, for one the German privacy protection agencies. Passing data across national boundaries can be a federal offense not only here. The EU Data Protection Directive (officially Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data) mandates that personal data may only be transferred to third countries if that country provides an adequate level of protection – something the U.S., just to name one, does not, at least not according to European standards, especially since foreigners do not benefit from the US Privacy Act of 1974.

This could well be a problem for smaller cloud businesses that don’t have an EU hosting center. They not only couldn’t serve EU-based customers if their service handles personal data, they also couldn’t serve multinational companies with employees in the EU.

Jurisdiction matters

Bruce Schneier has this posting about privacy risks of cloud software. These are all good points, but there is one that Bruce doesn’t mention. In fact, few people are mentioning it, which is a shame because it’s one of the biggest risks of using cloud services: controlling legal authority.

In other words, in what country’s jurisdiction is your Cloud service? Do you know? Shouldn’t you know?

This was brought home recently when Germany-based RapidShare had to divulge its users’ IP addresses, according to this Ars Technica article:

The popular Germany-based file hosting service RapidShare has allegedly begun handing over user information to record labels looking to pursue illegal file-sharers. The labels appear to be making use of paragraph 101 of German copyright law, which allows content owners to seek a court order to force ISPs to identify users behind specific IP addresses. Though RapidShare does not make IP information public, the company appears to have given the information to at least one label, which took it to an ISP to have the user identified.

The issue came to light after a user claimed that his house was raided by law enforcement thanks to RapidShare, as reported by German-language news outlet Gulli (hat tip). This user had uploaded a copy of Metallica’s new album “Death Magnetic” to his RapidShare account a day before its worldwide release, causing Metallica’s label to work itself into a tizzy and request the user’s personal details (if there’s anything record labels hate, it’s leaks of prerelease albums). It then supposedly asked RapidShare for the user’s IP address, and then asked Deutsche Telekom to identify the user behind the IP before sending law enforcement his way.

What’s really interesting is this comparison between the laws governing US-based services and those governing Germany-based RapidShare:

There are, however, many differences between and RapidShare. For one, if were to find itself in the position RapidShare is in with GEMA, it would be able to argue that the Safe Harbor provision in the DMCA protects it from liability as long as it removes infringing content after being presented with a takedown notice. In Germany (and many other countries), there is no equivalent, meaning that RapidShare has little choice but to comply with the rulings. RapidShare’s incredible popularity-Germany-based deep packet inspection (DPI) provider Ipoque recently put out a report saying that RapidShare is responsible for half of all direct download traffic-has only made the issue more sensitive for the record labels and service providers alike.

Jurisdiction matters.

Dining at the Caveat Emptor Café

I had seen the Ma.gnolia service a while back because it was an early adopter of OpenID. But I have not used the service, and now it looks like I never will:

In late January the social bookmarking service suffered a catastrophic technical failure that rendered all its users’ data irrecoverable. The service did have a data backup facility but it hadn’t been set up properly in the first place and hadn’t been subsequently tested to ensure it worked as expected.

In the fall out it transpires that the service was actually being run by a one-man-band operating on a shoe string budget with the minimum amount of hardware and software (in this case two Mac OS X servers and four Mac Minis).

There is a cautionary tale here for both cloud service providers and enterprises moving to cloud services. Cloud vendors need to have tested and proven continuity plans. They need to know that when disaster strikes they will still be able to recover and get their customers back online in a timely fashion. They also need to remember this about their continuity plans:

If you haven’t tested it, it doesn’t work.
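As a miniature illustration of that maxim: a backup is only verified once you have restored it somewhere else and compared the result against the live data. The copy-based “backup” below stands in for a real backup tool; the paths and file are made up.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

# Minimal restore-verification sketch: back the file up, restore it to a
# separate location, and compare checksums. The copy calls stand in for
# a real backup tool; nothing here is a real product's interface.

def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(live: Path, backup_dir: Path, restore_dir: Path) -> bool:
    shutil.copy(live, backup_dir / live.name)                      # "backup"
    shutil.copy(backup_dir / live.name, restore_dir / live.name)   # "restore"
    return checksum(live) == checksum(restore_dir / live.name)

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "backup").mkdir()
    (root / "restore").mkdir()
    data = root / "bookmarks.db"
    data.write_bytes(b"users' bookmarks")
    ok = verify_restore(data, root / "backup", root / "restore")
```

Ma.gnolia had the backup step; what it never exercised was the restore-and-compare step, which is the only part that proves anything.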

Welcome to the Caveat Emptor Café, my friends.

Risk, liability, and clouds

I had discussed the issue of software vendor liability here and made the point that no software vendor (now or in the near future) is going to assume liability for the cost to your business if there are defects in the software. Recently Ed Cone listened to some cloud vendors talk about security and had this to say about it:

The security model is so immature right now that it is clear that most of the assurances cloud vendors offer are around infrastructure and covering their own respective risks. Most cloud vendors will tell you outright that it is up to the customers to individually secure their own applications and data in the cloud, for example, by controlling which ports are open and closed into the customer’s virtualized instance within the cloud.

As Maiwald puts it, enterprises need to be aware of this distinction. Security in the cloud means different things to those offering cloud services and those using cloud services. Even if you’re working with the most open and forthright vendors who are willing to show you every facet of their SAS 70 audit paperwork and provide some level of recompense for security glitches on their end, they’re most certainly not assuming your risks. For example, if Amazon Web Services screws up and your applications are down for half a day, it’ll credit you for 110 percent of the fees charged for that amount of time but you’re still soaked for any of the associated losses and costs that come as a result of the downtime.

As organizations weigh the risks against the financial benefits of cloud computing, Maiwald believes they must keep in mind that, “There is risk that is not being transferred with that (cloud services) contract.”

There are several important points here. First, outsourcing a service doesn’t mean outsourcing the risk. Likewise, purchasing software isn’t the same as buying insurance. Customers of both cloud services and on-premise software need to understand this.
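A back-of-the-envelope illustration of that untransferred risk, using the 110%-of-fees credit from the Amazon example quoted above. The fee and loss figures are entirely made up; only the 110% multiplier comes from the quote.

```python
# Hypothetical numbers: what you pay the provider per hour, and what an
# hour of downtime actually costs your business. The 1.10 multiplier is
# the 110%-of-fees credit cited for Amazon Web Services.
hours_down = 12
hourly_fee = 2.00               # provider fee per hour (hypothetical)
hourly_business_loss = 500.00   # lost revenue/productivity per hour (hypothetical)

credit = round(1.10 * hourly_fee * hours_down, 2)  # what the provider refunds
loss = hourly_business_loss * hours_down           # what the outage costs you
uncovered = loss - credit                          # the risk you still carry
# credit -> 26.4, loss -> 6000.0: the credit covers well under 1% of the loss
```

The exact numbers don't matter; the shape does. The contract refunds fees, not consequences.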

Second, when evaluating the risk of moving to a cloud-based service you have to compare it against the risk of not moving to a cloud-based service. There is the risk that your service provider could be compromised, but that has to be weighed against the risk that your own IT systems will be compromised. Likewise, the risk of a service provider outage must be weighed against the risk of an internal system outage. Both will impact your business.

Third, you should also factor in opportunity risks. If you choose not to do something that reduces cost, you risk losing an opportunity that would have been available by dedicating those resources elsewhere.

Janus versus Vulcan in Federated Provisioning

I always thought the Roman god Janus should be the patron deity of security professionals. From the Wiki page on Janus:

In Roman mythology, Janus (or Ianus) was the god of gates, doors, doorways, beginnings and endings. His most prominent remnants in modern culture are his namesakes: the month of January, which begins the new year, and the janitor, who is a caretaker of doors and halls.

Likewise I see Vulcan as the symbol of those toiling in the system integration and provisioning fields, crafting the unsexy but necessary technologies that make disparate IT systems work together. From the Wiki page on Vulcan:

The Romans identified Vulcan with the Greek smith-god Hephaestus, and he became associated like his Greek counterpart with the constructive use of fire in metalworking.

This comparison came to me as I was reading the recent flurry of posts about Federated Provisioning. You can go here to read the latest from Dave Kearns, Pamela Dingle, Ian Glazer, and Nishant Kaushik. These are some really smart people who make some good points, but they are mostly talking past each other. This is a Mars versus Venus kind of debate, only it’s really Janus versus Vulcan.

The federation (Janus) side of the federated provisioning discussion tends to favor the just-in-time provisioning, minimal disclosure, and privacy aspects of federation. The provisioning (Vulcan) side focuses on the total life-cycle of the identities known to both the service provider and the consumer.

That’s not to say either side is right or wrong. Both make good points. But there are serious holes in the just-in-time provisioning approach and in Pamela’s just-on-usage deprovisioning suggestion. Let me give an example to illustrate.

Suppose ACME is outsourcing one of its sales applications. Now suppose the SaaS application has a feature that lets you (as a sales manager) assign a sales lead to one of your salesmen. Right off the bat you should see the first problem: how do you get the salesman’s information into the system if he has never performed the federated authentication step (I am intentionally being protocol-agnostic here)? Clearly ACME must have bulk provisioned all the salesmen to the application when setting up the service.

But what happens when a new salesman is hired? A provisioning process should be in place to provision the new salesman to all the SaaS services that need to know about him. And what happens if the new salesman doesn’t work out and is let go? Clearly ACME doesn’t want him to still show up in the list of people to whom sales leads can be assigned. Just-in-time provisioning and deprovisioning just won’t be sufficient for many applications, because the identity information needs to be synchronized before the user performs his first authentication. This is true of any application where the users interact with each other.
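The ACME example can be sketched in a few lines: the assignee list is drawn from accounts that already exist in the application, which is exactly why the salesman must be provisioned before anyone can assign him a lead, and deprovisioned promptly when he leaves. The class and names are illustrative, not any real product's API.

```python
# Toy model of the SaaS sales application from the ACME example.
# Leads can only be assigned to accounts that are already provisioned,
# so authentication-time (JIT) provisioning arrives too late.

class SalesApp:
    def __init__(self):
        self.users = {}   # provisioned accounts
        self.leads = {}   # lead id -> assigned salesman

    def provision(self, user_id):
        """Driven by ACME's provisioning process, not by first login."""
        self.users[user_id] = {"active": True}

    def deprovision(self, user_id):
        """Must happen when the salesman leaves, not on some 'next use'."""
        self.users.pop(user_id, None)

    def assign_lead(self, lead_id, user_id):
        if user_id not in self.users:
            raise KeyError(f"{user_id} is not provisioned")
        self.leads[lead_id] = user_id

app = SalesApp()
app.provision("new_salesman")                 # before his first login
app.assign_lead("lead-42", "new_salesman")    # works: the account exists
app.deprovision("new_salesman")               # let go: remove him promptly
```

Under a pure JIT model the `assign_lead` call would fail for every hire who hadn't yet logged in, and a just-on-usage deprovisioning model would leave departed salesmen assignable indefinitely.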

In addition to the data synchronization problems in federated provisioning, there is also the auditing issue. How does ACME audit what its salesmen did in the application? Right now there isn’t a standard that covers “federated auditing”, but I predict it is something customers will start asking for if they truly embrace SaaS.

I guess you could say I am more Vulcan than Janus.