Monthly Archives: April 2009

It’s the model, stupid

William Vambenepe has a great write-up on past and current IT management standards efforts here (and riffs on Bill Clinton’s famous internal campaign motto):

I wish that rather than being 80% protocols and 20% models, the effort in the WS-based wave of IT management standards had been the other way around. So we’d have a bit more to show for our work, for example a clear, complete and useful way to capture the operational configuration of application delivery services (VPN, cache, SSL, compression, DoS protection…). Even if the actual specification turns out to not make it, its content should be able to inform its successor (in the same way that even if you don’t use CIM to model your server it is interesting to see what attributes CIM has for a server).

It’s less true with protocols. Either you use them (and they’re very valuable) or you don’t (and they’re largely irrelevant). They don’t capture domain knowledge that’s intrinsically valuable. What value does WSDM provide, for example, now that it’s collecting dust? How much will the experience inform its successor (other than trying to avoid the WS-Addressing disaster)? The trend today seems to be that a more direct use of HTTP (“REST”) will replace these protocols. Sure. Fine. But anyone who expects this break from the past to be a vaccination against past problems is in for a nasty surprise. Because, and I am repeating myself, it’s the model, stupid. Not the protocol. Something I (hopefully) explained in my comments on the Sun Cloud API (before I knew that caring about this API might actually become part of my day job) and something on which I’ll come back in a future post.

I can sympathize with William. I wish that in the SPML effort we had spent more time working on the model up front. The plan had always been to finalize the protocol first and then work on the model; as a result, the model work never got properly addressed, although there is still a chance it might someday.
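To make the distinction concrete, here’s a rough sketch (in Python) of what “model first” means: capture the domain knowledge, what a server is, independently of any wire protocol. The attributes below are purely illustrative, loosely echoing the kind of properties CIM defines for a computer system, not any actual schema:

```python
# A sketch of "model first": the domain knowledge lives in the model,
# not in whichever protocol happens to carry it. Attribute names are
# illustrative only -- loosely CIM-flavored, not a real schema.
from dataclasses import dataclass, asdict
import json


@dataclass
class Server:
    name: str
    operational_status: str  # e.g. "OK", "Degraded", "Error"
    dedicated_role: str      # e.g. "web", "cache"
    total_memory_mb: int
    cpu_count: int


def to_rest_payload(server: Server) -> str:
    """The same model can ride over REST+JSON..."""
    return json.dumps(asdict(server))


def to_ws_payload(server: Server) -> str:
    """...or over an XML envelope; the protocol is interchangeable."""
    fields = "".join(f"<{k}>{v}</{k}>" for k, v in asdict(server).items())
    return f"<Server>{fields}</Server>"


s = Server("web01", "OK", "cache", 8192, 4)
print(to_rest_payload(s))
print(to_ws_payload(s))
```

The serializers here are throwaway; the model is where the durable value lives, which is exactly William’s point.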

An OpenID game changer

One theme I have harped on over the last year or so is that it means little for the big content providers to become OpenID providers if they don’t also become relying parties. You can’t build a highway with nothing but on-ramps.

So far, the vast majority of OpenID announcements from the big players have amounted to becoming yet another OP, or simply joining the OpenID Foundation. It looks like the game is finally changing: apparently Facebook is getting ready to become an OpenID Relying Party. From Inside Facebook:

Less than three months after joining the OpenID Foundation’s board as a sustaining corporate member (i.e. putting its weight and financial support behind OpenID), Facebook has just announced at the “technology tasting” event this afternoon at its Palo Alto headquarters that users will soon be able to log in to Facebook with their OpenID.

This could be huge for OpenID adoption, if it really happens.
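For the protocol-curious, here’s a minimal sketch of what the relying-party side of OpenID 2.0 boils down to: after discovering the user’s OP endpoint, the RP redirects the browser there with a checkid_setup request. Discovery and the subsequent assertion verification are omitted, and all the URLs are hypothetical placeholders:

```python
# A sketch of the relying-party side of OpenID 2.0: redirect the
# user's browser to their OP with a checkid_setup request. Discovery
# and assertion verification are left out; URLs are hypothetical.
from urllib.parse import urlencode

OPENID_NS = "http://specs.openid.net/auth/2.0"


def build_auth_redirect(op_endpoint: str, claimed_id: str,
                        return_to: str, realm: str) -> str:
    """Construct the URL the RP sends the user's browser to."""
    params = {
        "openid.ns": OPENID_NS,
        "openid.mode": "checkid_setup",
        "openid.claimed_id": claimed_id,
        "openid.identity": claimed_id,
        "openid.return_to": return_to,
        "openid.realm": realm,
    }
    return op_endpoint + "?" + urlencode(params)


# Hypothetical usage: send the user off to their OP to log in.
print(build_auth_redirect(
    "https://op.example.com/auth",            # discovered OP endpoint
    "https://alice.example.org/",             # user's claimed identifier
    "https://rp.example.net/openid/return",   # where the OP sends the user back
    "https://rp.example.net/",                # the RP's realm
))
```

The hard part for a site like Facebook isn’t constructing this redirect; it’s accepting an outside party’s assertion about who just logged in, which is exactly the off-ramp the big players have been avoiding.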

IP logs? What IP logs?

Now here is an interesting trend in Sweden: ISPs expunging IP logs so they can’t be turned over to the government. From Ars Technica:

Another Swedish ISP has decided not to retain customer IP records in an attempt to protect user anonymity. Tele2 announced this week that it plans to start deleting all IP records after they had been used internally, a move that is still legal under Swedish law, but is beginning to irk law enforcement.

Tele2 is the second major ISP to announce such a plan this month. The first was Bahnhof, which said earlier this month that it refused to keep any log files to hand over to authorities. Both ISPs are reacting to IPRED, the Intellectual Property Rights Enforcement Directive. The Swedish incarnation of this European directive went into effect on April 1, and it allows courts to force ISPs to turn over user data in cases of suspected copyright infringement. Because of this loss of anonymity, Internet traffic in Sweden saw an immediate drop.

Jurisdiction matters

Bruce Schneier has this posting about privacy risks for Cloud software. These are all good points, but there is one that Bruce doesn’t mention. In fact, few people are mentioning it, which is a shame, because it’s one of the biggest risks of using Cloud services: the controlling legal authority.

In other words, in what country’s jurisdiction is your Cloud service? Do you know? Shouldn’t you know?

This was brought home recently when Germany-based RapidShare had to divulge its users’ IP addresses, according to this Ars Technica article:

The popular Germany-based file hosting service RapidShare has allegedly begun handing over user information to record labels looking to pursue illegal file-sharers. The labels appear to be making use of paragraph 101 of German copyright law, which allows content owners to seek a court order to force ISPs to identify users behind specific IP addresses. Though RapidShare does not make IP information public, the company appears to have given the information to at least one label, which took it to an ISP to have the user identified.

The issue came to light after a user claimed that his house was raided by law enforcement thanks to RapidShare, as reported by German-language news outlet Gulli (hat tip). This user had uploaded a copy of Metallica’s new album “Death Magnetic” to his RapidShare account a day before its worldwide release, causing Metallica’s label to work itself into a tizzy and request the user’s personal details (if there’s anything record labels hate, it’s leaks of prerelease albums). It then supposedly asked RapidShare for the user’s IP address, and then asked Deutsche Telekom to identify the user behind the IP before sending law enforcement his way.

What’s really interesting is this comparison of the laws governing US-based Last.fm and Germany-based RapidShare:

There are, however, many differences between Last.fm and RapidShare. For one, if Last.fm were to find itself in the position RapidShare is in with GEMA, it would be able to argue that the Safe Harbor provision in the DMCA protects it from liability as long as it removes infringing content after being presented with a takedown notice. In Germany (and many other countries), there is no equivalent, meaning that RapidShare has little choice but to comply with the rulings. RapidShare’s incredible popularity (Germany-based deep packet inspection (DPI) provider Ipoque recently put out a report saying that RapidShare is responsible for half of all direct download traffic) has only made the issue more sensitive for the record labels and service providers alike.

Jurisdiction matters.

Cyber-attack in Morgan Hill

Bruce Perens has an interesting article about an event that garnered far less attention than it should have:

Just after midnight on Thursday, April 9, unidentified attackers climbed down four manholes serving the Northern California city of Morgan Hill and cut eight fiber cables in what appears to have been an organized attack on the electronic infrastructure of an American city. Its implications, though startling, have gone almost un-reported.

That attack demonstrated a severe fault in American infrastructure: its centralization. The city of Morgan Hill and parts of three counties lost 911 service, cellular mobile telephone communications, land-line telephone, DSL internet and private networks, central station fire and burglar alarms, ATMs, credit card terminals, and monitoring of critical utilities. In addition, resources that should not have failed, like the local hospital’s internal computer network, proved to be dependent on external resources, leaving the hospital with a “paper system” for the day.

The attack was as mysterious as it was successful. I suspect the “disgruntled ex-telco worker(s)” theory is the best explanation.

I loved how the local ham radio enthusiasts came to the rescue.

Some random thoughts about Kantara

I would like to congratulate all the Liberty, Concordia, ICF, XDI, and other standards folks for launching the Kantara Initiative. I have no doubt that there will be a lot of interesting work done under this umbrella.

That’s not to say there aren’t naysayers. There always are. My advice to those involved in KI is to ignore the naysayers and go solve problems. And have fun doing it.

As to whether the audience of technology adopters will find value in the results, I can’t venture a guess.

Oddly enough, the first time I read about it I thought it was the “Katana Initiative”. Damn dyslexia.

[Full Disclosure: I was the BMC representative to Project Liberty]

Apparently more credible

Some researchers in Austria are trying to tackle a very tough problem: blog credibility ranking. Unfortunately (if this blog article is credible) they are building their house on a very shaky foundation:

The proliferation of widespread Internet access has enabled everyone and their dog to start a website, but not every one is filled with what some of us would describe as “credible” information. That’s why some researchers are attempting to create software that can analyze Web content and automatically rank it to help out those who can’t quite decide for themselves.

Researchers at the Austria-based Know-Center are working on a program that analyzes the language used on blogs in order to rank them as highly credible, having average credibility, or “little credible.” The code looks at the distribution of words over time, and compares blog topics against articles from mainstream news, which are apparently weighted as being more credible.

[Emphasis added]
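As best I can tell from the article, the approach reduces to comparing word distributions. Here’s a guess (and only a guess) at what that might look like: cosine similarity over term frequencies, with cutoffs that are arbitrary and mine, not the Know-Center’s:

```python
# A sketch of the kind of comparison the article describes: score a
# blog post by how closely its word distribution matches a reference
# corpus of mainstream-news text. The technique (cosine similarity
# over term frequencies) and the thresholds are my guesses, not the
# Know-Center's actual algorithm.
import math
import re
from collections import Counter


def term_freqs(text: str) -> Counter:
    """Count word occurrences, lowercased."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


# Stand-in texts; a real system would use large corpora.
news = term_freqs("officials said the report confirmed the findings")
blog = term_freqs("i think the report got the findings totally wrong")

score = cosine_similarity(blog, news)
# Arbitrary illustrative cutoffs, mirroring the article's three labels.
label = ("highly credible" if score > 0.6
         else "average credibility" if score > 0.3
         else "little credible")
print(f"similarity={score:.2f} -> {label}")
```

Even tuned well, all this measures is whether a blog sounds like the newswire. Which brings me to the real problem.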

There are a whole host of reasons why using the mainstream media as a credibility benchmark is a bad idea, but the biggest is that the media, in general, does a poor job of getting facts right. That’s not a criticism, and really no one should expect any different.

The mainstream media gets things wrong simply because its content is generated by people who are generally not subject-matter experts in the things they write about. They are experts at writing and (usually) journalism, yet they are called upon to cover a vast universe of subjects with which they are barely familiar.

If you don’t believe me, think back to the last time you read a media article about a subject you are an expert in. Did the article accurately portray the crucial issues? Did it do so better than the dozens of blog entries you might have read on the same subject?