HP-Oracle Hardware Parallelization Clarified

Curt Monash, Contributor

October 1, 2008

5 Min Read

Some kind Oracle development managers have reached out and helped me better understand where Oracle does or doesn't stand in query and analytic parallelization. Let's start with the part everybody pretty much knows already:

• There are two parts to a parallelization story - how you get data off of disk, and what you do with it once you have it.

• To a first approximation, the best way to get a lot of data off of disk is in parallel, specifically with different CPUs talking to different disk drives. Until last week's announcement of Exadata, Oracle was the most prominent holdout against this view. (That dubious honor now goes to Sybase.)

• If processing units are working in parallel to get data off disk, it is optimal to do the following on the same node that receives the data:

- Do your projections. Meaning, whittle down the data to just the columns you need. (That's if you have a row-based system, as Oracle does; projections are moot in column-based systems.)

- Do your selections. Meaning, execute whichever WHERE clauses can be handled before the joins start.

- Do any joins you can. For example, you may be joining two tables that have been partitioned on the same hash key, or you may be joining a large fact table to a small dimension table, where the latter has been replicated across every node.

• Exadata is the first mainstream Oracle product that follows this optimal storage-facing parallelization strategy. Actually, opinions differ as to whether rigid coupling of processors to specific disks is necessary. But after supporting one extreme (the disk part of shared-everything), Oracle with Exadata has gone to the other extreme (the disk part of shared-nothing). Other vendors taking this approach include Teradata and Netezza. Greenplum and Vertica are less extreme.

• After accessing data in parallel and filtering it to the extent possible on the nodes that retrieved it, Oracle then ships the data to a conventional Oracle database server. That server does any further query processing, along with any other analytics. (A rough sketch of this two-tier flow follows the list.)
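
To make the division of labor concrete, here is a minimal sketch - in Python, as a conceptual illustration rather than anything Oracle actually ships - of the pattern described above: each storage-side worker owns its own slice of a fact table, does its selection and projection locally, joins against a small replicated dimension table, and ships only the reduced rows to a separate "database server" step for final aggregation. All table names, columns, and data are invented for the example.

```python
# Toy illustration of storage-side parallelization: each worker owns a
# partition of a fact table, selects/projects/joins locally, and ships only
# the reduced rows to a separate "database server" step for final work.
from concurrent.futures import ProcessPoolExecutor
from collections import defaultdict

# Small dimension table, replicated to every storage node.
REGION_DIM = {1: "East", 2: "West", 3: "Central"}

def storage_node_scan(partition):
    """Runs on the node that owns this slice of the fact table:
    selection (WHERE amount > 100), projection (keep two columns),
    and a local join against the replicated dimension."""
    reduced = []
    for order_id, region_id, amount in partition:
        if amount > 100:                       # selection pushed to the storage tier
            region = REGION_DIM[region_id]     # local join to replicated dimension
            reduced.append((region, amount))   # projection: ship only what's needed
    return reduced

def database_server(shipped_batches):
    """Runs on a *different* pool of CPUs: final aggregation over the
    already-reduced rows shipped up from the storage tier."""
    totals = defaultdict(float)
    for batch in shipped_batches:
        for region, amount in batch:
            totals[region] += amount
    return dict(totals)

if __name__ == "__main__":
    # Four partitions standing in for four storage servers' private disks.
    partitions = [
        [(1, 1, 250.0), (2, 2, 80.0)],
        [(3, 2, 120.0), (4, 3, 99.0)],
        [(5, 1, 310.0), (6, 3, 175.0)],
        [(7, 2, 410.0), (8, 1, 60.0)],
    ]
    with ProcessPoolExecutor(max_workers=4) as pool:
        shipped = list(pool.map(storage_node_scan, partitions))
    print(database_server(shipped))  # e.g. {'East': 560.0, 'West': 530.0, 'Central': 175.0}
```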

All that has been pretty clear from the get-go. Less obvious has been: How does the Oracle database server process the data it receives from the Exadata component? In particular, how parallel is the Oracle database server's processing?

It turns out that the answer has little to do with Oracle Real Application Clusters (RAC). Indeed, it has so little to do with RAC that I'm wondering what RAC does to justify its >10% share of overall Oracle Database Machine pricing. In particular, different CPUs generally do not share RAM or cache when doing what Oracle refers to as DSS (Decision Support System - an old term) work. Thus (while I'm still not clear on all the specifics or exceptions!) it is generally fair to say that Oracle's architecture on the database server side is akin to shared-nothing.

Please note that we're talking here about two different pools of CPUs - the ones built into the Exadata part of the system, in charge of talking to their own private disk drives, and the ones in the RAC cluster, which do non-basic query execution, along with the rest of the analytics. Indeed, those two pools of CPUs could be of completely different brands and configurations, although at the moment they are similarly-named HP servers using identical Intel chips. I.e., Oracle has moved into the node heterogeneity camp. By way of contrast, the usual-suspect MPP vendors - Teradata, Netezza, Greenplum, Vertica, Aster Data, Paraccel, Exasol, DATAllegro - do most or all of their subsequent processing on the same nodes that retrieve data. Thus, Oracle is the first major vendor for whom it is important to remember that different parts of a query plan get parallelized across completely distinct sets of processors and processing units.*

*Yes, I know that each Netezza SPU (Snippet Processing Unit) couples a PowerPC chip and an FPGA (Field-Programmable Gate Array). But that's a very different thing from having your data access occur on 14 servers and having the initial results sent to a different set of 8 servers.

So with all that background, I'm finally ready to lay out what I've gleaned about Oracle query and analytic parallelization, whether from public materials or private discussions.

• Commercial database parallelization started in the mid-1990s. Indeed, I was writing about it back in 1994, and it was part of the story in my Sybase Sell recommendation that year. Ken Jacobs et al. explained to me at the time that in the choice between "static" and "dynamic" partitioning and parallelization, Oracle had opted for the "dynamic." I.e., all CPUs could look at all data, but there would be a form of parallel processing anyway.

• Oracle introduced a feature called Parallel Query long ago. Then, with the release of 10g, Oracle said in effect "Now we've removed the prior limitations and gotten Parallel Query right!"

• Oracle tables or queries have to be explicitly enabled for parallelization. Degrees of parallelization (minimum and maximum processors devoted to a task) are also explicitly declared. Naturally, there are defaults and administrative tools that make all of that pretty automatic, or at least easy. The default default, as it were, is for most or all available processors to be used in parallel. (The sketch after this list gives a rough analogue.)

• Oracle also says it has parallelized a broad range of analytic functionality - data mining, OLAP, and so on, including generic UDFs (User-Defined Functions).
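
To make those last two bullets a bit more concrete, here is a minimal sketch - again in Python, as a conceptual stand-in rather than any actual Oracle mechanism or syntax - of a declared degree of parallelism that defaults to all available processors, and of a generic user-defined function being applied to partitions of data in parallel. All function names, parameters, and data are invented for illustration.

```python
# Rough analogue (in Python, not Oracle) of two ideas from the bullets above:
# a declared degree of parallelism with a default of "all available processors,"
# and a generic user-defined function applied to data partitions in parallel.
import os
from concurrent.futures import ProcessPoolExecutor

def resolve_degree(declared_min=1, declared_max=None):
    """Pick a worker count between a declared minimum and maximum;
    with nothing declared, default to every available processor."""
    available = os.cpu_count() or 1
    upper = declared_max if declared_max is not None else available
    return max(declared_min, min(upper, available))

def parallel_apply(udf, partitions, declared_min=1, declared_max=None):
    """Apply a user-defined function to each partition, using as many
    workers as the resolved degree of parallelism allows."""
    degree = resolve_degree(declared_min, declared_max)
    with ProcessPoolExecutor(max_workers=degree) as pool:
        return list(pool.map(udf, partitions))

def score(partition):
    """Stand-in for a generic UDF: some per-row computation."""
    return [value * value for value in partition]

if __name__ == "__main__":
    partitions = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
    print(parallel_apply(score, partitions))        # default: all available CPUs
    print(parallel_apply(score, partitions, 2, 4))  # declared min/max degree
```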

So how good is all this parallel technology? On the one hand, we know Oracle has been shipping it for a long time, and has it widely deployed. On the other, we also know that Oracle performance has been very problematic for large parallel queries. Surely most of those problems were due to the shared-disk bottleneck. But were they all (or so close to all as not to matter)? I don't yet know.

About the Author

Curt Monash

Contributor

Curt Monash has been an industry, product, and/or stock analyst since 1981, specializing in the areas of database management, application development tools, online services, and analytic technologies.
