Linux, The Standard Version
The first public beta of version 4.0 of the Linux Standard Base (http://ldn.linuxfoundation.org/article/lsb-beta-reveals-new-tools-features-developers) is upon us, and with it come some thoughts: How exactly do you standardize something as protean as Linux? Answer: Create a standard that goes with the flow.
The reality of Linux as we have come to know it is that it never has been, is not going to be, and probably shouldn't be a single, monolithic entity. The folks who use and implement Linux have spent a good decade and a half making sure of that. Out of one comes many.
This has long been Linux's biggest strength and its most troubling weakness. The same malleability that lets Linux appear as an Android phone or in a TiVo set-top box is also what can keep you from correctly compiling or running a given app on a somewhat left-of-center distribution.
Something of the same thing happened with Unix itself, back in the day. Apart from AT&T's System V version of Unix, there were (and still are) endless other variants: HP-UX, AIX, A/UX, ULTRIX, DG/UX, Xenix, and on and on. This all sprouted from two things: first, DARPA funding Bill Joy and his colleagues at UC Berkeley to extend AT&T's Unix with a networking stack and virtual memory, resulting in BSD; and second, the need of various vendors to a) support their own hardware and b) be able to offer OS support to their customers.
Despite superficial claims of cross-compatibility, many of these Unixes weren't truly compatible with each other. My suspicion is that they were this way by design as a form of vendor lock-in. It's hard to write a truly cross-compatible Unix application when, say, the argument order of a critical system function is different in each one (and anyone who thinks #ifdefs are a "solution" to that problem is only kidding themselves).
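To see why that is such a headache, here's a minimal, purely hypothetical C sketch. The vendor_lock() function and the UNIX_VENDOR_A / UNIX_VENDOR_B macros are stand-ins I've made up, not real APIs; they simply model the kind of argument-order drift that plagued code written for proprietary Unix variants:

#include <stdio.h>

/* Stub so the sketch compiles anywhere; on a real system this would be
 * the vendor's own library function. (Hypothetical, for illustration.) */
static int vendor_lock(int a, int b) { (void)a; (void)b; return 0; }

/* Default to "vendor A" so the example builds without extra flags. */
#ifndef UNIX_VENDOR_B
#  define UNIX_VENDOR_A 1
#endif

/* Pretend vendor A expects (fd, flags) while vendor B expects (flags, fd);
 * the application has to paper over the difference itself. */
#if defined(UNIX_VENDOR_A)
#  define portable_lock(fd, flags) vendor_lock((fd), (flags))
#elif defined(UNIX_VENDOR_B)
#  define portable_lock(fd, flags) vendor_lock((flags), (fd))
#else
#  error "Unknown Unix variant; add yet another branch here"
#endif

int main(void)
{
    /* Every additional variant means another branch above, and a silent
     * bug if the argument order is guessed wrong. */
    if (portable_lock(0, 1) != 0)
        perror("portable_lock");
    return 0;
}

Multiply that across dozens of functions and half a dozen vendors, and "cross-compatible" stops meaning much.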
Today the situation's mercifully quite different. Linux vendors and customers both have good reasons to make sure apps can run uniformly across different flavors of Linux. Granted, a given Linux distribution can offer something that another distro doesn't have, but there has to be a baseline that everyone can fall back on.
So who creates such a baseline? Preferably someone who doesn't have a vested interest in one distro over another (i.e., the Linux Foundation). The other thing that helps is making the standard something the people building distributions and applications can evaluate in a hands-on way. The LSB isn't just an abstraction; it's a toolset, something that complements the very malleability of the thing it's testing.
One other thing that is important, I think, is the way the LSB is elective, not mandatory. A mini-distro that's only designed for a couple of specific uses doesn't need to be LSB-compliant -- but if it earns that label, it's that much more useful. It's opt-in, but if opting in becomes its own reward, why not do it?
If you're planning to use the LSB yourself in some form, or you've run into situations where Linux's very mutability has been one of the things that's frustrated you, sound off below.
Follow me on Twitter: http://twitter.com/syegulalp