Amazon Adds PostgreSQL, Big C3 Servers

Amazon CTO Werner Vogels announces PostgreSQL database service, new instance types, and the use of solid state disks to speed I/Os.

Charles Babcock, Editor at Large, Cloud

November 15, 2013


Amazon Web Services CTO Werner Vogels this week unveiled a new database service based on PostgreSQL, a service for mobile developers that allows application streaming, new C3 and I2 instance types, and a cross-region snapshot capture capability for its Redshift data warehouse users.

Amazon officials are fond of tallying a total for their innovations each year. At Amazon's Re:Invent show this week in Las Vegas, they kept adding to that total as the show unfolded. Last year's event drew 6,000 attendees, a mix of developers, partners, and customers. This year, registration closed 2.5 weeks before the event, once the maximum of 9,000 attendees had preregistered.

The creation of the new instance types reflects the growing size of workloads being placed in the AWS Elastic Compute Cloud. C3 will be Amazon's most compute-intensive instance type. It will come in five sizes, powered by modern Xeon E5-2680 v2 chips running at 2.8 GHz. The C3 instances come with local solid state disk storage, making for rapid activation when needed.

The base unit will be a large C3 with two virtual CPUs, 3.75 GB of RAM, and 32 GB of solid state disk. The two virtual CPUs supply seven of what Amazon calls EC2 compute units. Amazon's ECUs don't relate directly to present-day hardware; instead, they are a unit of measure based on the performance of a 2007-2008 Xeon chip running at 1 GHz. The large C3 is available at 15 cents an hour.
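For readers who want to see what these sizes look like in practice, here is a minimal sketch of launching the base C3 size programmatically. It assumes the modern boto3 Python SDK (which postdates this article) and a placeholder AMI ID; both are illustrative, not part of Amazon's announcement.

    import boto3

    # Minimal sketch: launch a c3.large (2 vCPUs, 3.75 GB RAM, 32 GB SSD).
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-00000000",   # placeholder AMI ID, not a real image
        InstanceType="c3.large",  # the base C3 size described above
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])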

[ Want to learn more about another major new service from AWS? See Amazon Launches Workspaces For Desktop Users. ]

The high end of the C3 line comes with 32 virtual CPUs, 60 GB of RAM, and 640 GB of solid state disk. Those 32 virtual CPUs are the equivalent of 108 EC2 compute units. It's available at $2.40 an hour.

"These are highest performance processors on EC2," said CTO Werner Vogels in a Thursday keynote address. For sheer compute cycles, they give the most bang for the buck over any other instance type, he added.

Amazon added an I/O instance type, the I2, which also depends on large amounts of solid state disk to yield very high I/O rates. Vogels said I2s are capable of reaching up to 175,000 read I/O operations per second and 160,000 write I/O operations per second. In turning to SSDs to speed performance, Amazon is following Rackspace, which announced new cloud servers using SSDs just before Re:Invent, and Digital Ocean, a cloud startup in New Jersey that boasts fast startup times and operations with its SSD-equipped virtual servers.

The I/O instance type reflects a concerted effort on Amazon's part to improve I/O performance for customer workloads, something that has proven hard to predict for many customers. The chief culprit is believed to be the variable efficiency of Elastic Block Store, the storage that supports running applications. Customers complain that an application that normally runs fast is unaccountably slow at certain times, reflecting contention for I/O channels.

For more than a year, Amazon has offered PIOPS, or provisioned I/O operations per second. Customers pay more, but they can designate an I/O level they wish to achieve at any time, and Amazon guarantees they will come within 10 percent of that mark 99.9 percent of the time. For example, if a customer provisioned 4,000 I/Os per second, Amazon guarantees at least 3,600 I/O operations per second 99.9 percent of the time, according to figures released by Miles Ward, senior manager of solutions architecture, in a Re:Invent session Wednesday. I/O operations are clearly difficult to architect with assurance in the complexity of a multi-tenant cloud, and Amazon has left itself a little wiggle room in case its best effort to deliver exactly what's ordered doesn't quite work out.
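As a rough illustration of how a customer designates that I/O level, the sketch below requests a provisioned-IOPS EBS volume at the 4,000 IOPS mark from the example. It assumes the boto3 SDK and the "io1" provisioned-IOPS volume type; the availability zone and size are placeholders.

    import boto3

    # Minimal sketch: create an EBS volume with 4,000 provisioned IOPS.
    # Under the PIOPS guarantee described above, it should deliver at
    # least 3,600 IOPS (within 10 percent) 99.9 percent of the time.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # placeholder zone
        Size=400,                       # GB; placeholder size
        VolumeType="io1",               # provisioned-IOPS volume type
        Iops=4000,                      # the designated I/O level
    )
    print(volume["VolumeId"])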

AWS also added the ability to create Redshift data warehouse snapshots and save them to a region other than the one in which they were taken. The service allows data warehouse applications to operate in multiple Amazon regions without falling out of sync. It also gives Redshift users a disaster recovery strategy based on the simple process of capturing snapshots and applying updates in a region outside the one where they're generated. Vogels noted that interest in cross-region updates peaked after Hurricane Sandy blacked out much of the East Coast. Amazon's operations at U.S. East in Ashburn, Va., were not affected, but many data centers lost power and the ability to operate when backup systems failed or ran out of fuel.
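Here is a hedged sketch of what enabling that cross-region protection might look like through the API, again assuming boto3 and a hypothetical cluster name:

    import boto3

    # Minimal sketch: automatically copy a Redshift cluster's snapshots
    # to a second region, the disaster recovery pattern described above.
    redshift = boto3.client("redshift", region_name="us-east-1")

    redshift.enable_snapshot_copy(
        ClusterIdentifier="my-warehouse",  # hypothetical cluster name
        DestinationRegion="us-west-2",     # snapshots are copied here
        RetentionPeriod=7,                 # days to keep copied snapshots
    )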

In addition, AWS told mobile developers they can use its new AppStream service to stream an application to a variety of end-user devices. Amazon officials are trying to attract developers to their service by making it easier for smartphone and tablet application developers to produce an application once, then run it on EC2, using AppStream to reach multiple devices. Amazon takes care of the hardware and of pushing updates to the application, while developers concentrate on functions and features.

Vogels said availability of the open source PostgreSQL database as a service was one of the most requested additions from the customer base. Amazon already offers Oracle, MySQL, and Microsoft's SQL Server. But PostgreSQL adds an open source, ANSI-standard database system, something that MySQL doesn't claim to be. So PostgreSQL can be used with full-blown relational database applications that require strict data consistency.
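To make the new service concrete, here is a minimal sketch of creating a PostgreSQL instance through the RDS API, assuming boto3; the identifier, instance class, and credentials are placeholders, not values from the announcement.

    import boto3

    # Minimal sketch: stand up a managed PostgreSQL database on RDS.
    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="pg-demo",   # placeholder name
        DBInstanceClass="db.m3.medium",   # a small class of the era
        Engine="postgres",                # selects the PostgreSQL engine
        MasterUsername="dbadmin",         # placeholder credentials
        MasterUserPassword="change-me-now",
        AllocatedStorage=20,              # GB
    )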



About the Author

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
