Analyze This!
A few days after I published a column discussing the <a href="http://www.linuxpipeline.com/showArticle.jhtml?articleId=166400446">open-source marvel that is ClamAV</a>, I heard from a PR rep for one of the big anti-virus vendors. I'll leave both his name and the firm's name out of this -- not because they did anything wrong (they didn't) but because I'm going to focus here on a single reference in this email that ultimately opened a much bigger can of worms.
Here's the part of his email that caught my eye (italics mine):
"Most feel because of the 'number of eyes' that open source is exposed to that it will naturally be more secure . . . Burton Group calls this idea “the myth of more eyes”, and cites numerous cases where critical flaws were left undetected . . . One element Burton highlights is the potential for backdoors to be placed in open source without the community’s knowledge. These types of backdoors could become ripe targets for hackers and virus writers once Linux gains a larger foothold in the enterprise."
The Burton report in question is a piece entitled "Securing Open Source Infrastructure." The report is off-limits to non-subscribers, and the summary is unhelpful even by IT research firm standards. It turns out that the PR rep who sent the email actually got the gist of the quote I cited from a Network World article, "Open Source Vs. Windows: Security Debate Rages," dated July 5. I'm inclined to think that the Burton Group gave the Network World reporter full access to the research report, since the summary on the Burton Web site doesn't reference any of this material.
In any case, when I first studied this nasty little chunk of FUD, two incidents involving enterprise software with confirmed, hard-coded, back-door logins came immediately to mind: the beyond-notorious Borland InterBase back-door login, which lurked in the product (if you can call plain-text strings visible with a simple ASCII dump of the executable binary "lurking") from 1994 until 2001, when outside developers stumbled across the back door in the newly open-sourced InterBase code; and the April 2004 Cisco Systems alert confirming the presence of compiled-in logins within two critical, widely deployed network infrastructure software components.
These incidents involved two of the most respected and trusted enterprise software firms in the world. In InterBase's heyday, a large number of firms used the database server to store a wide variety of data assets, from the mundane to the priceless; it's safe to assume that many of these companies trusted Borland more than they trusted their own IT staffs.
It's possible, as everyone involved claims, that no attacker ever thought something as simple as a binary-file ASCII dump could produce a skeleton key to every copy of InterBase compiled since at least 1994. It's also possible that the Borland developers who created the back door for legitimate reasons never got curious, or bitter, or angry, or drunk and mischievous while they held this secret.
It's also possible, however, that the InterBase back door facilitated the biggest IT security breach in history -- but that through a no less amazing miracle, the "visitors" were benign enough that the companies involved were able to sweep them under the rug and move on.
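To make the "ASCII dump" point concrete: the technique is no more sophisticated than the Unix strings utility, which anyone can approximate in a few lines of Python. This is a minimal sketch of the general idea; the file name is hypothetical, and it has nothing to do with the actual InterBase binary or its login:

```python
# Sketch of an "ASCII dump": scan a binary file for runs of printable
# characters, much like the Unix `strings` utility. Hypothetical example
# only -- not the actual InterBase binary or credentials.

import string
import sys

PRINTABLE = set(string.printable) - set("\t\n\r\x0b\x0c")
MIN_LEN = 6  # ignore short, accidental runs of printable bytes

def ascii_strings(path, min_len=MIN_LEN):
    """Yield every run of >= min_len printable ASCII characters in a file."""
    run = []
    with open(path, "rb") as f:
        for byte in f.read():
            ch = chr(byte)
            if ch in PRINTABLE:
                run.append(ch)
            else:
                if len(run) >= min_len:
                    yield "".join(run)
                run = []
    if len(run) >= min_len:
        yield "".join(run)

if __name__ == "__main__":
    # Usage: python ascii_dump.py some_server_binary
    for s in ascii_strings(sys.argv[1]):
        print(s)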
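```

Pipe that output through a grep for words like "login" or "password" and any plain-text credential compiled into the binary pops right out -- which is exactly why a hard-coded back door was never really hidden from anyone who bothered to look.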
I'm only dredging up this history lesson to illustrate the stark contrast between the truth about back-door exploits and the rotten chestnuts the Burton Group seems to have stashed in its contribution to the "raging" (and oddly named) "Open Source vs. Windows" security debate.
On this point, however, let's tell it like it is: There is no "debate," because there are no examples of similar hard-coded, widely deployed trojans in an open-source product.
In all fairness, according to Network World, the Burton Group report subsequently admitted that "dealing with traditional vendors isn't necessarily any better." Yet this is little more than a rhetorical band-aid, and it has exactly the effect on the report's earlier FUD-inflicted injuries that it was supposed to have: none.
Can you imagine the stink that would ensue if an analyst simply eyeballed the empirical trend here -- not an unusual research methodology for this bunch -- and penned a report warning that closed-source software is a proven source of undetectable, virtually unlimited security risk to any company that uses it?
That would never happen, of course, even if we had a few more incidents like Borland's and Cisco's close encounters with hard-coded disaster. Imagine, on the other hand, that we see even one comparatively minor scandal involving a back door in deployed open-source code for an enterprise product. For some reason, it's much easier to see exactly how that scenario would play, isn't it?
I'll leave you for now with two links that are well worth your time. The first is an April 2005 BusinessWeek column by Steve Hamm chronicling his dead-end efforts to pry real, live, non-imaginary data from the original Who-Pays-These-People gang at the Yankee Group.
The second is a destination I've recommended before: David Wheeler's mind-bending tour de force, "Why Open Source / Free Software? Look At The Numbers," where, in his introduction, you can get a quick-and-dirty grasp of why IT industry research, as a profession, no longer has any credibility left to lose.
Matt McKenzie is editor of Linux Pipeline.