Category Archives: Software

Smart but dumb grids

Here is some news to help you sleep better at night (via InstaPundit):

You might have read about how we’re spending billions of dollars on a new electrical “smart grid” to make electrical distribution more efficient. A critical component of this grid is a new generation of “smart meters” which can communicate with the grid to determine when electricity is relatively scarce or plentiful.

Now a report in The Register describes how a researcher from security firm IOActive will demonstrate security flaws in these meters that could bring the grid down. Mike Davis, a senior security consultant for IOActive, says that the software in the vast majority of meters uses no encryption and requires no authentication before accepting commands to perform critical operations like updating their own software. Davis will demonstrate the flaws at the Black Hat security conference next month.
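To make that concrete, here is a minimal sketch (mine, not IOActive’s) of the check the meters reportedly skip: authenticating a command with an HMAC under a per-device key before acting on it. The message layout, function name, and key handling are all hypothetical.

```c
/* Hypothetical sketch of "authenticate before you act" for a meter
 * command, using OpenSSL's HMAC. The real protocol details are unknown;
 * the point is that the reported firmware performs no such check at all. */
#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stddef.h>

static int command_is_authentic(const unsigned char *msg, size_t msg_len,
                                const unsigned char *claimed_mac, /* 32 bytes */
                                const unsigned char *key, int key_len)
{
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;

    /* Recompute the MAC over the message with the shared per-device key. */
    HMAC(EVP_sha256(), key, key_len, msg, msg_len, mac, &mac_len);

    /* Compare in constant time; reject the command on any mismatch. */
    return mac_len == 32 && CRYPTO_memcmp(mac, claimed_mac, 32) == 0;
}
```

Without something like this (or, better, signed firmware images), “update your own software” is an open invitation.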

Apparently our good old friends memcpy and strcpy are involved.
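For anyone who hasn’t met this bug class: strcpy copies until it hits a NUL terminator and has no idea how big the destination buffer is. A contrived illustration (not from the actual meter firmware) of the unsafe pattern and a bounded alternative:

```c
#include <stdio.h>
#include <string.h>

void greet_unsafe(const char *input)
{
    char name[16];
    strcpy(name, input);           /* writes past 'name' if input has 16+ chars */
    printf("hello %s\n", name);
}

void greet_bounded(const char *input)
{
    char name[16];
    /* snprintf never writes more than sizeof name bytes and always
     * NUL-terminates, truncating long input instead of overflowing. */
    snprintf(name, sizeof name, "%s", input);
    printf("hello %s\n", name);
}
```

When the attacker controls the input and the destination sits on the stack next to a return address, that overflow becomes code execution.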


Friends don’t let friends do crypto

Jeff Atwood has this post about the dangers of copying code from the internet and writing your own crypto routines. He includes this very funny play from Thomas Ptacek about writing crypto, which also touches on one of my favorite topics, web SSO.

I do question Jeff’s comment about “thoroughly reviewing code”. These kinds of issues are very seldom found in code reviews from what I have seen in 20 years of doing them. It is very difficult to review someone else’s code and catch all the subtle land mines that might exist. You get lucky once in a while and find something, but all that does is reinforce a false sense of confidence that your review process is preventing these sorts of errors.

That’s not to say you shouldn’t do code reviews, but be realistic about it. Code reviews are useful to find obvious stuff and to share knowledge about the code across team members, but you aren’t going to find the subtle errors.

Software Language Impedance Mismatch

Ben Laurie rips a report on software security for suggesting that C and other languages introduce software insecurity because they don’t prevent buffer overflows:

So, what’s wrong with that statement? Firstly, I think we’ve got past the idea that there’s something extra special about buffer overflows as a security issue. Yes, there are many languages that prevent them completely (e.g. PHP, amusingly), but they don’t magically produce secure programs either. Indeed, pretty much all languages used for web development are “safe” in this respect, and yet the web is a cesspit of security problems, so how did that help?

Thirdly, talking about “unsafe” languages implies that there might be “safe” ones. Which is nonsense.

Ben makes the excellent point that while languages such as Java and C# eliminate buffer overflows and a couple of other risks, they don’t necessarily result in more secure applications. For example, one problem I regularly see in practice is failing to prevent injection attacks (SQL, OS, etc.). It doesn’t matter what language you develop in if you take user-supplied data and stick whatever it contains into a SQL query.

And BTW, a SQL injection attack on an unprotected system is a whole lot easier to pull off than a buffer overflow attack, especially for disgruntled ex-employees who know the details of the back-end DB. It’s not even hard.
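To show how small the difference is, here is a contrived example in C with SQLite; the table and column names are made up, and every language and database driver has an equivalent of the second form:

```c
#include <sqlite3.h>
#include <stdio.h>

/* Vulnerable: user input is pasted into the SQL text, so input like
 *   ' OR '1'='1
 * rewrites the query itself. */
void find_user_unsafe(sqlite3 *db, const char *name)
{
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT id FROM users WHERE name = '%s'", name);
    sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* Parameterized: the input is bound as data and can never become SQL. */
void find_user_safe(sqlite3 *db, const char *name)
{
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                           -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("id = %d\n", sqlite3_column_int(stmt, 0));
        sqlite3_finalize(stmt);
    }
}
```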

But this does raise an interesting question: why do we choose the languages that we do? In some cases it’s because there is a team standard. Or perhaps you join a project already in progress. In some cases it’s chosen because it’s what you know best.

But sometimes it’s done to reduce impedance mismatch. For example, I have on occasion had to write ISAPI filters and Apache modules. I suppose I could try to write an Apache module in Java, but I would spend an inordinate amount of time trying to bridge between the Java and C worlds. I could try to do it in C#, if the module only had to run in Apache for Windows, but again that would take a lot of extra work, and in the end it would be unlikely to be worth it.
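To see why, here is roughly what the skeleton of an Apache 2.x module looks like, trimmed to a single do-nothing handler (a minimal sketch, not a production module). It’s all C structs and function-pointer hooks, which is exactly the surface you would have to bridge from Java or C#:

```c
#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"
#include "ap_config.h"
#include <string.h>

/* Runs for requests whose handler is set to "example" in the config. */
static int example_handler(request_rec *r)
{
    if (!r->handler || strcmp(r->handler, "example"))
        return DECLINED;            /* not ours; let Apache keep looking */
    ap_set_content_type(r, "text/plain");
    ap_rputs("Hello from a native module\n", r);
    return OK;
}

static void example_register_hooks(apr_pool_t *p)
{
    ap_hook_handler(example_handler, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA example_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,   /* no per-dir/server config or commands */
    example_register_hooks
};
```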

As another example I have had to write provisioning connectors for both MIIS and Sun IdM. I wouldn’t try to write a MIIS connector in Java and I wouldn’t try to write a Sun IdM connector in C#.

So forget about people telling you what language is “safest” or “easiest”. Choose the one that fits the job at hand. If you don’t know a language that fits the job, team with someone who does or outsource the work.

But please do scrub those SQL statement parameters.

Who is we? You is.

Pogo said it best: “We have met the enemy and they is us”. Bob Blakley asks us, the vendors, four basic questions on security:

Are we willing to give anything up?

Are we willing to do anything different?

Are we willing to take any blame?

Are we willing to give any guarantees?

The answer to all of these questions really depends on who we is. I am not, for instance, willing to give up earning a paycheck in the software industry. Which means the things I am willing to give up or do differently are constrained by the other we’s that determine the success or failure of my employer.

Let me give you an example: Vista UAC. Here is a classic trade-off between security and convenience. And the users hate it. Worse for Microsoft, its presence hasn’t helped sell Vista even for business use.

So what we are willing to give up and do differently is constrained by a market that wants security without any additional cost or effort on the part of the end user.

As for blame, who should get it? Yes, vendors often do stupid things for which they should get blamed. But what about situations where there are different levels of security available and the end user chooses less than the most secure? Tried running your browser of choice without JavaScript enabled lately? Who gets the blame for that? You for enabling JavaScript, the browser vendor for providing the capability in the first place, or all the web site designers who force you to enable JavaScript to view their sites?

How about open source? Who gets the blame for those vulnerabilities?

As for guarantees, they are a good idea, but there has to be a limit. No software company can take on the liability for the end user’s losses in a security breach. The reason is simple: the liability is open-ended, but the cost of the software is not.

While Bob’s questions are interesting, they are not the important ones. The important questions are:

Are you as the consumer willing to factor security into your buying choices?

Are you willing to pay more for higher security?

Are you willing to have fewer features if it means a more secure system?

Are you willing to take responsibility for your own actions?

The answer to these questions today is no.

Dining at the Caveat Emptor Café

I had seen the Ma.gnolia service a while back because it was an early adopter of OpenID. But I have not used the service, and now it looks like I never will:

In late January the social bookmarking service Ma.gnolia.com suffered a catastrophic technical failure that rendered all its users’ data irrecoverable. The service did have a data backup facility but it hadn’t been set up properly in the first place and hadn’t been subsequently tested to ensure it worked as expected.

In the fall out it transpires that the Ma.gnolia.com service was actually being run by a one-man-band operating on a shoe string budget with the minimum amount of hardware and software (in this case two Mac OS X servers and four Mac Minis).

There is a cautionary tale here for both cloud service providers and enterprises moving to cloud services. Cloud vendors need to have tested and proven continuity plans. They need to know that when disaster strikes they will still be able to recover from it and get their customers back online in a timely fashion. They also need to remember this about their continuity plans:

If you haven’t tested it, it doesn’t work.

Welcome to the Caveat Emptor Café my friends.

Risk, liability, and clouds

I had discussed the issue of software vendor liability here and made the point that no software vendor (now or in the near future) is going to assume the liability for the cost to your business if there are defects in the software. Recently Ed Cone listened to some cloud vendors talk about security and had this to say about it:

The security model is so immature right now that it is clear that most of the assurances cloud vendors offer are around infrastructure and covering their own respective risks. Most cloud vendors will tell you outright that it is up to the customers to individually secure their own applications and data in the cloud, for example, by controlling which ports are open and closed into the customer’s virtualized instance within the cloud.

As Maiwald puts it, enterprises need to be aware of this distinction. Security in the cloud means different things to those offering cloud services and those using cloud services. Even if you’re working with the most open and forthright vendors who are willing to show you every facet of their SAS 70 audit paperwork and provide some level of recompense for security glitches on their end, they’re most certainly not assuming your risks. For example, if Amazon Web Services screws up and your applications are down for half a day, it’ll credit you for 110 percent of the fees charged for that amount of time but you’re still soaked for any of the associated losses and costs that come as a result of the downtime.

As organizations weigh the risks against the financial benefits of cloud computing, Maiwald believes they must keep in mind that, “There is risk that is not being transferred with that (cloud services) contract.”
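To put rough numbers on that credit (mine, not the article’s): if your monthly AWS bill is $10,000 and the service is down for half a day, the fees charged for those 12 hours come to about $10,000 × 12/720 ≈ $167, so the 110 percent credit is roughly $183. If the outage cost your business tens of thousands in lost sales, the gap is your risk, not Amazon’s.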

There are several important points here. First, outsourcing a service doesn’t mean outsourcing the risk. Likewise, purchasing software isn’t the same as buying insurance. Customers of both cloud services and on-premises software need to understand this.

Second, when evaluating the risk of moving to a cloud-based service you have to compare it against the risk of NOT moving to a cloud-based service. There is the risk that your service provider could be compromised. But that has to be weighed against the risk that your own IT systems will be compromised. Likewise, the risk of a service provider outage must be weighed against the risk of an internal system outage. Both will impact your business.

Third, you should also factor in opportunity risks. If you choose not to do something that reduces cost, you take the risk of losing an opportunity that could have been pursued by dedicating those saved resources elsewhere.

Asymmetric Risk, Malpractice Insurance, and Personal Oxen

Bruce Schneier has two very interesting posts on his blog that stand out (to me at least) by their proximity to each other. Most recently, Bruce has this to say about the financial meltdown:

The most interesting part explains how the incentives for traders encouraged them to take asymmetric risks: trade-offs that would work out well 99% of the time but fail catastrophically the remaining 1%. So of course, this is exactly what happened.
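To put made-up numbers on that asymmetry: a strategy that earns $1 million 99 times out of 100 and loses $500 million the hundredth time has an expected value of 0.99 × $1M − 0.01 × $500M = −$4.01M, yet the trader collects a bonus in each of the 99 good years.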

But three posts earlier Bruce has this to say about software vendors:

So if BitArmor fails and someone steals your data, and then you get ridiculed in the press, sued, and lose your customers to competitors — BitArmor will refund the purchase price.

Bottom line: PR gimmick, nothing more.

Yes, I think that software vendors need to accept liability for their products, and that we won’t see real improvements in security until then. But it has to be real liability, not this sort of token liability. And it won’t happen without the insurance companies; that’s the industry that knows how to buy and sell liability.

Talk about asymmetric risk. If software vendors accepted liability (or even partial liability) for anything that might happen as a result of their product, who in their right mind would ever go into the business? The problem is that liability is open-ended while the profit on each deal is not. It would be nuts for any vendor to take such an asymmetric risk. It would be like an MD practicing medicine without malpractice insurance.

Which is, as Bruce alludes, how any such liability would ultimately become acceptable. Software vendors would buy liability insurance to protect themselves in the event that they are ever found at fault. Like malpractice insurance, this would pool the risk and spread it over all the software vendors.

Which in the end eliminates any real incentive to avoid the mistakes to begin with. Sure, the premiums would increase for a vendor found at fault, but just as with malpractice insurance the pain would be diluted by eventually raising everyone’s rates. And everyone would just price the rate increase into their business model, exactly like the medical community does today. In the end it won’t really be the vendor’s money or risk.

And that’s what it really boils down to in the end: it’s a matter of exactly whose ox is getting gored. You’ll notice that the only people suggesting that software vendors be held liable or otherwise punished for defects are not themselves producing software products. I have never seen a zero-defect advocate who could actually deliver zero-defect software.