Livermore Labs Turns To Linux-Based Supercomputing Clusters
Lawrence Livermore National Laboratory will get 100 teraflops of combined processing power once it has all four of its new clusters up and running.
Underscoring the supercomputing market's move toward clusters, the Lawrence Livermore National Laboratory has just received the first of four Linux-based clusters that researchers plan to put to work on climate studies, astrophysics, and tracking the lifespan of the country's nuclear weapons stockpile.
The lab had outgrown the 4-year-old, 11-teraflop machine it used for unclassified research. In its place, it will turn to four new clusters built by Appro, a vendor of high-performance servers, storage, and high-end workstations. Working together, the four clusters can provide 100 teraflops of processing power. A teraflop is one trillion calculations per second.
"In general, the demand for high-performance computing in the research community has been growing steadily," says Don Johnston, a spokesman at the Lawrence Livermore labs. "We're trying to meet that demand. With the old cluster, there [were too many] demands being made for its time. It was difficult for people to get on the machine."
To eliminate the long queue for the supercomputer, Johnston says, Livermore Labs decided to bring in a group of clusters. Each of the four can work on a major problem, multiplying the number of researchers who can get their work done at the same time. Appro has completed the first of the four clusters. Code-named Rhea, the InfiniBand-interconnected cluster has 576 AMD Opteron 8000 Series processors and a peak processing power of 22 teraflops.
The other three clusters are scheduled to be completed by the end of Q1 2007. When finished, the cluster group is designed to offer researchers 2,592 4-socket, 8-core nodes and about 100 teraflops of processing power.
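A quick back-of-the-envelope calculation shows how those node counts line up with the quoted peak figure. In the Python sketch below, the node and core counts come from the lab's stated configuration; the 2.4-GHz clock speed and two floating-point operations per core per cycle are assumptions typical of dual-core Opterons of that era, not figures confirmed by the lab or Appro.

    # Rough peak-performance estimate for the four-cluster group.
    # Node and core counts come from the article; clock speed and
    # FLOPs per cycle are assumed (typical dual-core Opteron values).
    nodes = 2592            # 4-socket, 8-core nodes across all four clusters
    cores_per_node = 8      # 4 sockets x 2 cores per socket
    clock_hz = 2.4e9        # assumed 2.4-GHz clock speed
    flops_per_cycle = 2     # assumed floating-point ops per core per cycle

    peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
    print(f"Theoretical peak: {peak_flops / 1e12:.1f} teraflops")
    # prints: Theoretical peak: 99.5 teraflops

Under those assumptions, the arithmetic lands within half a teraflop of the roughly 100 teraflops the lab quotes.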
When the four clusters are up and running, they'll together form the third-largest supercomputer at Lawrence Livermore.
"It reflects other moves in the cluster space over the last couple of years," says Charles King, principal analyst of Pund-IT Research. "They're made to be flexible. They're set up to run independently or in different cluster configurations and that's a good thing. You've got a collective hardware solution that can be configured and reconfigured after a project is done in order to use the same infrastructure for a project that might require slightly different things."
King adds that this is a strong win for Appro, which he says must have seen the "lightning in the bottle" with clusters. In June 2001, there were only about 30 clusters on the Top500 list of supercomputers, he says. In June of this year, there were 350.
"Clusters are the dominant technology in supercomputing at this particular point," King says. "They've allowed a lot of companies and researchers to leverage X86 systems, which has brought the price of supercomputing way down over the last five or six years. It's not cheap by any means, but it's a hell of a lot cheaper than it used to be."
Addison Snell, an analyst at IDC, adds that Appro is putting the clusters together with off-the-shelf components rather than proprietary big iron, which also spares the lab a huge price tag. For a long time, only a few well-funded research labs and a handful of giant companies could afford to get into supercomputing. Clusters are enabling more and more companies to consider it as an option, Snell says.
"A lot of the growth in high-performance computing has come from new users in the market," he adds. "Clusters are helping to make that happen."
Livermore Labs' Johnston says the initial procurement price for the four clusters came in at $15 million.
"With greater computing power, you're able to run more complex simulations," he says. "There are things you can do on those machines that simply couldn't be done before. This will absolutely be a big boost."