Amazon: Era Of Data Centers Ending

At Amazon's Web Service Summit 2012, Amazon's Adam Selipsky cites "undeniable signs" of a cloud transition. Within 10 to 20 years, most enterprises will no longer own or operate their own data centers, he says.

Doug Henschen, Executive Editor, Enterprise Apps

April 19, 2012

The era in which most big companies operate their own data centers is coming to a close. Instead they'll turn, slowly but surely, to the cloud. That's the bold prediction Amazon's Adam Selipsky, VP of product marketing, sales, and product management, made Thursday at Amazon's Web Service Summit 2012 in New York.

"That's a very big statement, but people are getting a glimpse of a future in which most enterprises will not own or operate data centers, and those that do will have small, special-purpose data centers," Selipsky told a crowd of more than 3,000 attendees at the Jacob Javits Convention Center.

Selipsky allowed that the transition could take as long as "10 or 20 years," but he said there are "undeniable signs" that the move is already underway. A few of those signs came in the form of customer testimonials offered during the event, but more on those in a moment.

Selipsky is, of course, assuming that Amazon will be at the forefront of the move to the cloud. At one point during the event, Amazon CTO Dr. Werner Vogels shared stats from The 451 Group showing that Amazon currently holds a roughly 60% share of the infrastructure-as-a-service market.

[ Want to survive an Amazon service outage? Read How Netflix, Zynga Beat Amazon Cloud Failure. ]

Selipsky also allowed that Amazon "obviously has a lot more work ahead of it" if masses of enterprise customers are to use Amazon as their data center service. With that he highlighted a handful of the 28 upgrades and releases AWS has delivered in the first three months of this year, including Direct Connect and AWS Storage Gateway. Direct Connect lets companies connect to AWS through private, high-bandwidth connections that bypass the Internet. AWS Storage Gateway lets enterprise customers store snapshots of their data center on the Amazon S3 service for backup and recovery.
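For readers curious what the underlying backup pattern looks like, here is a minimal sketch, using the boto Python library, of pushing a data center snapshot to S3; it illustrates the plain-S3 approach that Storage Gateway automates, and the bucket and file names are hypothetical, not drawn from the event.

```python
# Hypothetical sketch: uploading a local backup snapshot to Amazon S3 with boto.
# Bucket and file names are illustrative; AWS credentials are assumed to be
# configured in the environment.
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('example-backup-bucket')

key = bucket.new_key('snapshots/db-snapshot-2012-04-19.tar.gz')
key.set_contents_from_filename('/backups/db-snapshot.tar.gz')
print("Snapshot uploaded: %s" % key.name)
```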

Executives also highlighted Amazon DynamoDB, a high-scale NoSQL database service released in February, which Vogels called the most significant new service delivered by Amazon this year because it "eliminates database scaling and performance as a roadblock" to running applications.
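To illustrate the provisioned-throughput model behind that claim, here is a minimal sketch using the boto library: the developer declares read and write capacity up front and DynamoDB handles partitioning and scaling. The table name, keys, and capacity figures are assumptions for illustration only.

```python
# Hypothetical sketch of creating and writing to a DynamoDB table with boto.
# Table name, key schema, and throughput values are illustrative.
import boto

conn = boto.connect_dynamodb()

schema = conn.create_schema(hash_key_name='user_id', hash_key_proto_value=str)
table = conn.create_table(name='example_sessions', schema=schema,
                          read_units=10, write_units=5)

item = table.new_item(hash_key='user-123',
                      attrs={'last_login': '2012-04-19', 'visits': 42})
item.put()
```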

The best evidence that Amazon (and its cloud competitors) just might be able to make good on Selipsky's prediction came through a handful of customer testimonials. Ryan Park, operations and infrastructure leader at Pinterest, explained that the fast-growing social website is run entirely on AWS and that, until last month, he was the only engineer on staff. The site currently attracts more than 17 million unique visitors per month, and it manages 410 terabytes of data on AWS. That requires 90 high-memory instances of Amazon EC2 compute capacity and 64 pairs of sharded databases (with one master and one slave for redundancy in each pair).
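The following is a rough sketch, not Pinterest's actual code, of how requests might be routed across 64 master/slave shard pairs like those Park described; the host names and hashing scheme are placeholders.

```python
# Illustrative shard routing across 64 database pairs: writes go to the
# master in each pair, reads can fall back to the slave replica.
NUM_SHARD_PAIRS = 64

SHARDS = [
    {'master': 'db-master-%02d.example.internal' % i,
     'slave': 'db-slave-%02d.example.internal' % i}
    for i in range(NUM_SHARD_PAIRS)
]

def shard_for(user_id, for_write=True):
    """Pick the shard pair for a user; writes route to the master host."""
    pair = SHARDS[hash(user_id) % NUM_SHARD_PAIRS]
    return pair['master'] if for_write else pair['slave']

print(shard_for('user-123'))          # master host for writes
print(shard_for('user-123', False))   # slave host for reads/failover
```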

Pinterest also uses Amazon Elastic MapReduce Hadoop processing at a cost of "a few hundred dollars per month," Park said, noting that an unnamed company handling similar processing loads has two full-time employees just to keep a Hadoop cluster running on premises.
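For context, launching a pay-per-use Elastic MapReduce job is a short script rather than a standing cluster; the sketch below uses boto's EMR support, and the bucket paths, scripts, and instance counts are assumptions, not details from Park's talk.

```python
# Hypothetical sketch of submitting a Hadoop streaming job to Elastic MapReduce
# with boto. S3 paths and instance sizes are illustrative.
import boto
from boto.emr.step import StreamingStep

conn = boto.connect_emr()

step = StreamingStep(name='Example word count',
                     mapper='s3n://example-bucket/wordcount-mapper.py',
                     reducer='aggregate',
                     input='s3n://example-bucket/input/',
                     output='s3n://example-bucket/output/')

jobflow_id = conn.run_jobflow(name='Nightly analytics',
                              log_uri='s3://example-bucket/emr-logs/',
                              steps=[step],
                              num_instances=4,
                              master_instance_type='m1.small',
                              slave_instance_type='m1.small')
print(jobflow_id)
```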

Amazon customer CycleComputing uses AWS capacity to handle supercomputing challenges. In the company's largest project to date, CycleComputing was able to draw on 51,000 compute cores from across Amazon's global AWS compute capacity to test potential cancer drugs for the pharmaceutical research firm Schrodinger. The capacity was harnessed in the cloud within a matter of hours, and, according to CycleComputing CEO Jason Stowe, it packed the equivalent power of $20 million to $25 million worth of supercomputing infrastructure. But at Amazon's cloud rates, it cost just $4,828.85 per hour to run the system, and it took only three hours to complete the analysis.

"This means any researcher with a National Science Foundation grant or any person at an academic institution or anyone at a large corporation can now do science that's impossible to do on an internal system in so short a timeframe," Stowe said.

In a third customer testimonial, Jon Brendsel, PBS's VP of products, explained the TV network's use of AWS to serve up content and streaming video to more than 30 million unique visitors per month and an average of 115,000 unique mobile visitors per day.

Video streaming is, of course, at the heart of PBS's compute and storage needs, which are supported by nearly 70 databases running on Amazon EC2 and more than 170 storage "buckets" on the Amazon S3 storage service. PBS streams 49% of its 145 million monthly video views to iPads, iPhones, and other mobile devices, and mobile viewing is driving much of the overall growth.

Three years ago PBS was serving up about 200 terabytes of streaming video per month. Today, one year after the debut of a PBS iPad app, the content provider is streaming more than 40 petabytes of video per month.

"We've grown a lot, but with Amazon's infrastructure, we're set to scale significantly," Brendsel said.

Vogels and Selipsky talked at length about security and reliability, with Vogels detailing Amazon's eight regions of availability, each with multiple "availability zone" data centers on separate seismic and power grids. Of course, despite those redundancies, Amazon has repeatedly proven not to be immune to service outages, as we've reported at length.
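The multi-availability-zone pattern Vogels described boils down to spreading redundant instances across zones within a region so that a single data center failure doesn't take an application down. A minimal sketch with boto follows; the AMI ID is a placeholder, not a real image.

```python
# Hypothetical sketch: launching redundant EC2 instances in two availability
# zones of the us-east-1 region. The AMI ID is a placeholder.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

for zone in ('us-east-1a', 'us-east-1b'):
    conn.run_instances('ami-00000000',        # placeholder AMI
                       instance_type='m1.small',
                       placement=zone)
```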

Vogels also pointed out that Amazon has reduced AWS prices 19 times as it has gained economies of scale. S3 customers saw costs drop by as much as 40% with a price cut earlier this year, he said, while some EC2 users saw their costs drop by as much as 32%. It's this cost promise that may eventually wear down would-be customers who might not otherwise consider breaking out of what Vogels described as "the traditional model of enterprise software development."

"In the old style, enterprises were held hostage with long-term contracts because that was the only way you could drive costs down," Vogels said. "We believe that's wrong ... and if you help us gain additional economies of scale, we believe you should benefit. And that's why we've reduced our pricing."

About the Author

Doug Henschen

Executive Editor, Enterprise Apps

Doug Henschen is Executive Editor of information, where he covers the intersection of enterprise applications with information management, business intelligence, big data and analytics. He previously served as editor in chief of Intelligent Enterprise, editor in chief of Transform Magazine, and Executive Editor at DM News. He has covered IT and data-driven marketing for more than 15 years.
