benchmarkingblog

Elisabeth Stahl on Benchmarking and IT Optimization

Posts Tagged ‘Oracle’

Oracle Meets That ’70s Show

leave a comment »

Last week I made the annual spring break pilgrimage to my childhood home in the shadows of the cherry blossoms.

What always strikes me when I visit — and you’ve probably had the same experience — is how nothing, almost nothing, has changed since I lived there four decades ago. Yes, there’s a huge TV with cable now. And a cell phone, though not so smart yet. And an iPad that always needs something done to it. But other than these few new features, the general layout and beauty of the interior is essentially the same.

Which I love. Why get new kitchen cabinets when you can take the beautiful solid wood ones and have them refinished? Why buy new cheap chairs when ’50s Danish Modern is so well built and gorgeous to boot?

But one of the best examples of this retro environment, hands down, has to be the downstairs bathroom. When entering you are transported to the time of Nixon and Sonny and Cher. The colors are tremendous – bright bright yellows and oranges. Big plaid wallpaper. And wicker accessories. A 70’s dream of a bathroom. And you know what — it still looks great. The glamour of everything from the 70’s has returned in full force in this one tiny room.

But some things are not meant to come back. And that includes the way some vendors compare systems and benchmarks.

I recently saw a comparison from Oracle of the SPARC T7-1 vs. the IBM Power System S824. It brought me right back to when I started blogging almost ten years ago, when we were all inundated with benchmark flaws. Let’s take a look at some of the details:

  • The tool Oracle used to compare the systems is NOT an industry standard benchmark audited by a third party. It is a tool that can be used by anyone. Oracle ran all tests themselves.
  • The tool used is adapted from the TPC-C benchmark, which Oracle itself has said in the past is dated.
  • The disks used in the systems compared are not the same – HDD vs. SAS.
  • The logs and database files for the IBM test were not hosted on the IBM system – they were hosted on a separate Oracle system.
  • Solaris 11.3 was used for the logs and database file systems on the Oracle side; Solaris 11.2 was used for the IBM configuration.
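If you want to sanity-check a vendor comparison yourself, the first step is mechanical: diff the configurations. Here is a minimal sketch of that idea; the field names and values are illustrative placeholders, not details from Oracle's report:

```python
# Sanity-check sketch: flag configuration mismatches between two benchmark
# runs before trusting a head-to-head result. Fields and values here are
# illustrative placeholders, not details from Oracle's report.

def comparability_issues(config_a, config_b, controlled_fields):
    """Return the controlled fields on which the two configurations differ."""
    issues = []
    for field in controlled_fields:
        a, b = config_a.get(field), config_b.get(field)
        if a != b:
            issues.append(f"{field}: {a!r} vs {b!r}")
    return issues

run_a = {"disk": "HDD", "os": "Solaris 11.3", "third_party_audited": False}
run_b = {"disk": "SAS", "os": "Solaris 11.2", "third_party_audited": False}

for issue in comparability_issues(run_a, run_b, ["disk", "os", "third_party_audited"]):
    print("MISMATCH -", issue)
```

Any field that comes back as a mismatch is a variable the comparison failed to control — exactly the kind of thing the bullets above list.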


A photo of my childhood downstairs bathroom was Instagrammed recently. It received 35 likes, over half of them from students at the best design school in the country. That makes sense. Oracle’s benchmark comparisons don’t.


************************************************

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

March 23, 2016 at 10:07 am

Posted in Oracle, POWER8, SPARC


Back in Time with Oracle

with one comment

Some of you may know that this week was a very big one for “Back to the Future” movie fans. On Wednesday, Oct. 21, 2015, at 4:29 p.m., our today caught up to the tomorrow depicted in “Back to the Future, Part II.” In that 1989 film, a DeLorean time machine appears from 30 years in the past.

To those who love time travel, this is a really big deal. Some towns even went so far as to rename themselves after the city featured in the film. Ceremonies worldwide were held at exactly 4:29 p.m.

And this reminded me of a benchmark result that was just published today by Oracle on the SAP SD benchmark.

As we move into newer digital workloads, some of the older industry benchmarks have gone by the wayside. Many of us have spent a lot of time analyzing these newer workloads and developing new metrics for them. But one classic benchmark is still extremely appropriate for many of today’s applications – and that is the suite of SAP benchmarks.

But this new Oracle result just published is clearly dated — even though it is a brand new result on a brand new Oracle SPARC system. The IBM Power Systems result with DB2 from over a year ago delivers over 2X the performance per core of this new Oracle SPARC result. (1)
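The arithmetic behind that per-core claim is easy to reproduce from the certified results cited in footnote (1) below:

```python
# Reproduce the "over 2X better performance per core" claim from the
# certified SAP SD results cited in footnote (1).
ibm_saps, ibm_cores = 436_100, 80        # IBM Power E870, Cert #2014034
oracle_saps, oracle_cores = 168_600, 64  # Oracle SPARC T7-2, Cert #2015050

ibm_per_core = ibm_saps / ibm_cores           # ~5,451 SAPS per core
oracle_per_core = oracle_saps / oracle_cores  # ~2,634 SAPS per core

ratio = ibm_per_core / oracle_per_core
print(f"IBM per-core advantage: {ratio:.2f}x")
```

At roughly 2.07x, the "over 2X" wording holds.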

What’s really exciting, unlike this new benchmark result, is that many of the predictions of the future in the “Back to the Future” movie were right on. But I am still waiting for the dog-walking drone.

************************************************

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.

(1) IBM Power Enterprise System E870 on the two-tier SAP SD standard application benchmark running SAP enhancement package 5 for the SAP ERP 6.0 application; 8 processors / 80 cores / 640 threads, POWER8, 4.19 GHz, 2048 GB memory, 79,750 SD benchmark users, running AIX® 7.1 and DB2® 10.5, dialog response: 0.97 seconds, order line items/hour: 8,722,000, dialog steps/hour: 26,166,000, SAPS: 436,100, database response time (dialog/update): 0.013 sec / 0.026 sec, CPU utilization: 99%, Cert #2014034 vs. Oracle SPARC T7-2 result of 30,800 users, average dialog response time: 0.96 seconds, fully processed order line items/hour: 3,372,000, dialog steps/hour: 10,116,000, SAPS: 168,600, average database request time (dialog/update): 0.022 sec / 0.047 sec, CPU utilization of central server: 98%, operating system, central server: Solaris 11, RDBMS: Oracle 12c, SAP Business Suite software: SAP enhancement package 5 for SAP ERP 6.0, Certification #2015050; SPARC T7-2: 2 processors / 64 cores / 512 threads, SPARC M7 4.133 GHz, 16 KB (D) and 16 KB (I) L1 cache per core, 256 KB (D) L2 cache per 2 cores and 256 KB (I) per 4 cores, 64 MB L3 cache per processor, 1024 GB main memory

SAP and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. All other product and service names mentioned are the trademarks of their respective companies.


Written by benchmarkingblog

October 23, 2015 at 12:59 pm

Posted in Oracle, SAP


Awesome POWER8 Benchmarks, Awesome Dessert

with 3 comments

New frozen yogurt establishments seem to be popping up everywhere. You know, the ones with the cute name, the pink and green decor, the pink and green spoons to match.

A key differentiator in this new wave of stores is the do-it-yourself aspect. But even more extraordinary is the mind-boggling array of toppings. Dozens, in some cases hundreds. I especially love the portfolio of berries that are offered — but my favorite happens to be the small pieces of chocolate that look like rocks.

These stores have pretty much bloomed everywhere these days — rural, suburban, and urban areas alike. I first saw them in Manhattan a couple of years ago; but I knew they had become a true game changer when I located one in, of all places, suburban Poughkeepsie.

IBM today formally announced new POWER8 systems, servers that allow data centers to manage staggering data requirements with unprecedented speed, all built on an open server platform. This game-changing infrastructure represents IBM’s singular commitment to providing higher-value, open technologies for the latest types of applications, including cloud, big data and analytics, and mobile and social computing.

Of course, performance is a key factor in this groundbreaking technology. Some of us may have heard about these new systems earlier; but today is the day if you are really into performance — and this dessert is the best part. IBM has just added 6 new #1 benchmarks to the already huge portfolio of existing record benchmarks. Let’s take a look at these for the new IBM Power S824:

  • Two-tier SAP SD standard application benchmark: 21,212 SD benchmark users (1)
  • Oracle E-Business Suite 12.1.3 extra-large Payroll batch: 1,090,909 checks per hour (2)
  • Siebel CRM 8.1.1.4: 50,000 users (3)
  • SPECjEnterprise2010: 22,543 EjOPS (4)
  • SPECfp_rate2006: 1370 (5)
  • SPECint_rate2006: 1750 (6)

What’s especially interesting about these 6 is that they represent a wide portfolio of excellence and value in a real world environment — from specific applications that you run every day, like sales, payroll, and order management, to Java and even technical computing. And these are varied workloads (just like all those berries) from various vendors, including Oracle, that have been shown via popular and well-accepted third party benchmarks to surpass all other systems, including x86.


Benchmarks, pick your favorite #1.

Mine is still the chocolate that looks like rocks.

************************************************

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.

(1) IBM Power System S824 on the two-tier SAP SD standard application benchmark running SAP enhancement package 5 for the SAP ERP 6.0 application; 4 processors / 24 cores / 192 threads, POWER8, 3.52 GHz, 512 GB memory, 21,212 SD benchmark users, running AIX® 7.1 and DB2® 10.5, dialog response: 0.98 seconds, line items/hour: 2,317,330, dialog steps/hour: 6,952,000, SAPS: 115,870, database response time (dialog/update): 0.011 sec / 0.019 sec, CPU utilization: 99%, Certification #2014016. Source: http://www.sap.com/benchmark.
(2) The 12-core IBM Power S824 (3.52 GHz) achieved the best 12-core extra-large Oracle E-business 12.1.3 benchmark Payroll batch result (1,090,909 checks per hour). Source: http://www.oracle.com/us/solutions/performance-scalability/index.html
(3) The 6-core IBM Power S824 (4.1 GHz) database server achieved the best overall Siebel CRM 8.1.1.4 result (50,000 users). Source: http://www.oracle.com/us/solutions/benchmark/white-papers/siebel-167484.html
(4) The 24-core IBM Power S824 (3.52 GHz) database server running DB2 10.5 with a 24-core IBM Power S824 (3.52 GHz) application server running WebSphere 8.5 is the best 24-core SPECjEnterprise2010 configuration (22,543 Enterprise jAppServer Operations Per Second (EjOPS)). Source: http://www.spec.org
(5) The 24-core IBM Power S824 (3.5 GHz, POWER8) is the best 24-core system (1370 SPECfp_rate2006 result; 24 cores, 4 chips, 6 cores/chip, 8 threads/core). Source: http://www.spec.org
(6) The 24-core IBM Power S824 (3.5 GHz, POWER8) is the best 24-core system (1750 SPECint_rate2006 result; 24 cores, 4 chips, 6 cores/chip, 8 threads/core). Source: http://www.spec.org

All results current as of April 28, 2014.

SAP and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. Other names may be trademarks of their respective owners.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).


Written by benchmarkingblog

April 28, 2014 at 7:24 am

Oracle’s SPARC Enhancements: Construction or Wind?

leave a comment »

Two nights ago I spent a lovely 6 hours in the airport. Flight cancelled, next plane delayed for incoming aircraft, no runways to be had in one of the largest airports in the country. Announcement 1: There was only one runway because the others were under construction. Announcement 2: There was only one runway that could be used because the wind patterns were strange.

All you want is to get home to your couch and your dog. At the same time, it would be great to get the real story on what is happening; just because you want to know, you want it to make sense.

And that’s exactly how I was feeling again as I read one of Oracle’s recent press releases on the Fujitsu SPARC M10 “enhancements.” The claim was for “15 world records.” I decided to take a look at each one just to know — was it the construction or the wind?

1. Oracle needed 2.5x more cores/memory than IBM. The IBM result was from 4 years ago.
2. Oracle needed 2x more cores/memory than IBM. The IBM result was from 4 years ago.
3. Oracle compared themselves with themselves.
4. Oracle compared themselves with themselves.
5. Oracle needed 2x more cores than SGI.
6. Oracle compared themselves with themselves.
7. Oracle needed 2x more cores than IBM.
8. Oracle compared themselves with themselves.
9. Oracle needed 4x more cores than IBM.
10. Oracle compared themselves with themselves.
11. Oracle picked on little x86.
12. Oracle compared themselves with themselves.
13. Oracle needed 16x more cores than IBM. The IBM result was from 6 years ago.
14. Oracle needed 8x more cores than IBM. The IBM result was from 6 years ago.
15. Oracle needed 8x more cores than IBM. The IBM result was from 6 years ago.

Also note that there are really only 4 different benchmarks here. And notably, all but 2 of these 15 are in the Technical Computing space, using simple component-type benchmarks.

So that’s the real story. The other real story is that if I had driven the 500 miles I would have been home much faster.

************************************************

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

April 11, 2014 at 2:49 pm

Posted in SPARC

Tagged with , , ,

Guns and Butter at OpenWorld

with one comment

I guess when you are really really rich you can do things like miss your own keynote to go to a sporting event. Or get prices wrong by millions of dollars.

Yes, I took Econ 1A in college (though I may remember more about the cute boy in the row in front of me than supply and demand). I clearly remember grasping the intricate graphs and complex formulas in the thick colorful book by Samuelson.

But that preparation did not seem to help this week in trying to understand the new Oracle “Economics” at OpenWorld. A quick search did not lead to any scholarly articles on “near linear pricing.” If there is any sort of “re-engineering” of economics going on, it has not been picked up by the MBA programs just yet.

So when you see any pricing comparisons from Oracle these days, here is what you need to know:

  • Sometimes the systems compared have different numbers of processor cores. Sometimes the systems are the same “size” but size does not equal the performance of what can be run on the system.
  • Sometimes the systems compared have different amounts of memory. Sometimes the systems have the same amount of memory but amount of memory does not equal the performance of what can be run on the system.
  • Sometimes Oracle includes no software on their system and includes software on the other vendor’s system.
  • Sometimes Oracle does not include the expensive Oracle database license costs, which by the way are calculated by core.
  • Sometimes the systems compared have very very different types of support and maintenance.
  • Sometimes the systems compared have very different types and amounts of storage included. Or no storage at all. As we know, storage can be a large part of a system’s configuration and price.

There has been absolutely NO substantiation to justify that these comparisons pair equivalent-price configurations with equivalent-throughput systems.

What is ultimately important is what non-functional requirements the system gives you at a certain price. Compare, and do the TCO. And tell Oracle: I don’t buy sockets, I buy performance.
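To see why "I buy performance, not sockets" matters to the bill, here is a deliberately simplified TCO sketch. Every number in it is a hypothetical placeholder, not a vendor list price; the point is the shape of the math:

```python
# Deliberately simplified three-year TCO sketch. Every number below is a
# hypothetical placeholder; the point is the shape of the math, not the prices.

def three_year_tco(server_price, cores, license_per_core, annual_support,
                   storage_price, years=3):
    """Server + per-core database licenses + storage + support over `years`."""
    return (server_price
            + cores * license_per_core
            + storage_price
            + years * annual_support)

# A "cheap" box with many cores vs. a pricier box with fewer cores,
# under the same hypothetical per-core license cost.
many_core_box = three_year_tco(50_000, cores=64, license_per_core=47_500,
                               annual_support=20_000, storage_price=100_000)
few_core_box = three_year_tco(150_000, cores=16, license_per_core=47_500,
                              annual_support=20_000, storage_price=100_000)

print(f"64-core system: ${many_core_box:,}")
print(f"16-core system: ${few_core_box:,}")
```

The server with the higher sticker price wins easily once per-core licensing is counted — exactly the kind of line item these comparisons tend to leave out.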

************************************************

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

September 25, 2013 at 10:36 am

Posted in Oracle


The Wizard of OpenWorld

with one comment

Sometimes it’s great to see something for the hundredth time.

On Saturday night I went to see one of the all time greats, The Wizard of Oz — in 3D. The huge IMAX screen and 3D effects pulled you into the movie. I was dancing with the Munchkins and really skipping down that yellow brick road.

And sometimes you just want to cackle and destroy like the Wicked Witch of the West because you are being forced to see something for the hundredth time.

At Oracle OpenWorld’s keynote last night, the industry benchmarks that were highlighted made me want to do just that.

  • Oracle with Fujitsu claimed “14 World #1’s.” Then of course, doing what they do time and again, they only actually discussed a few of them.
  • In the SAP SD 2-tier comparison, Fujitsu/Oracle’s result was from 2013. IBM’s from 2010. Fujitsu/Oracle’s result used 640 cores, IBM only 256. IBM’s result was actually over 2x the users per core of the Oracle/Fujitsu result. We have surely seen this before, ain’t it the truth? (1)
  • The SPECjbb2013 comparison highlighted the M10 against some undesignated x86 system. Like the Cowardly Lion picking on little Toto.
  • The third benchmark was STREAM, relevant to very few in the commercial world.
  • Larry compared the M6-32 “Big Memory Machine” against a Power System. With absolutely no details and data to back the claim. We’ve seen this over and over as well.
  • Make no mistake about it. Absolutely none of these performance benchmarks have any pricing component whatsoever as a metric. And any pricing that is shown should be analyzed – what storage is included, what do maintenance and support cost, is software added in? We’ve seen creative accounting here so many times before.
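For the SAP SD bullet above, the users-per-core arithmetic can be checked directly against the certified figures in footnote (1):

```python
# Users per core from the certified SAP SD results in footnote (1).
ibm_users_per_core = 126_063 / 256      # IBM Power 795, Cert #2010046
fujitsu_users_per_core = 153_000 / 640  # Fujitsu/Oracle M10, Cert #2013014

ratio = ibm_users_per_core / fujitsu_users_per_core
print(f"IBM users-per-core advantage: {ratio:.2f}x")
```

At roughly 2.06x, "over 2x the users per core" checks out.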

What was so special about seeing The Wizard of Oz on the big screen in 3D was that you noticed all of these incredible details (like the colorful birds, the beautiful expanse of red poppies, and the stage hand behind the apple trees) that you had never seen before. What was so NOT special about the OpenWorld keynote was that you were seeing the same old story — but with almost no details behind it. Once again.

************************************************

(1) IBM Power 795 (4.00 GHz) two-tier SAP SD Standard Application Benchmark result (SAP enhancement package 4 for SAP ERP 6.0 (Unicode)): 32 processors / 256 cores / 1024 threads, POWER7, 4096 GB memory, 126,063 SAP SD benchmark users, OS: AIX 7.1, DB2 9.7, Certification #2010046 vs. Fujitsu SPARC M10-4S (40 processors / 640 cores / 1280 threads), 153,000 SAP SD benchmark users, Oracle, Certification #2013014. Source: http://www.sap.com/benchmark. Results as of 9/23/13.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).

SAP, mySAP and other SAP product and service names mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world.

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

September 23, 2013 at 8:59 am

Posted in Oracle


Taking the Wind Out of Oracle’s Sails

with one comment

I don’t always read the sports pages. But lately, with the US Open, the Olympics win for Japan, and college football, how could I not?

And lo and behold — instead of a splashy ad on the front page of the paper, there was an article this week deep into the sports section — about Oracle.

It appears that the Oracle team in the America’s Cup competition was in the news — not for doing well — but for receiving penalties. The penalties, the harshest in America’s Cup history, were imposed for illegally modifying 45-foot catamarans.

One place where we would like to think that “illegal modifications” are also not tolerated is in benchmarking.

Oracle this week claimed performance and price performance leadership based on the Storage Performance Council SPC-2 benchmark. Since this is an industry standard benchmark, I’m sure there were no modifications – but that doesn’t mean there were no difficulties with the comparisons claimed. Here’s what you need to know:

  • The Oracle ZFS Storage ZS3-4 result was just released. The IBM and HP results they compare to are from 2012, a lifetime ago in the benchmarking world.
  • The Oracle storage result used a 2-node cluster and 1.6x the physical capacity of the IBM DS8870 result. (1)
  • A fit for purpose methodology is needed for these storage comparisons – are you running analytics or critical batch processing? Different workloads require different levels of nonfunctional requirements which translate into different types of storage.
  • With storage, it’s essential to compare all the options, including many of the new flash offerings.
  • What is the reliability and support for these storage devices? Instead of just price/performance, make sure you study the real TCO.
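One quick exercise for any SPC-2 claim: multiply the published throughput by the published price-performance to recover the total priced configuration. Using the figures from footnote (1):

```python
# Recover each tested configuration's total price from the published SPC-2
# metrics in footnote (1): total price = MBPS * (price per MBPS).
results = {
    "Oracle ZFS ZS3-4 (2013)": (17_244.22, 22.53),
    "IBM DS8870 (2012)":       (15_423.66, 131.21),
    "HP P9500 (2012)":         (13_147.87, 88.34),
}

for system, (mbps, price_per_mbps) in results.items():
    total_price = mbps * price_per_mbps
    print(f"{system}: {mbps:>9,.2f} MBPS, ~${total_price:,.0f} priced configuration")
```

The headline price-performance number hides very different total configurations, which is one more reason to look past the single metric and do the TCO.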


It matters whether you win or lose. But it also matters how you play the game.

************************************************

(1) Results as of September 10, 2013; for more information go to http://www.storageperformance.org/results. SPC-2 results for Oracle ZFS Storage ZS3-4 are 17,244.22 SPC-2 MBPS™, $22.53 SPC-2 Price-Performance. Full results are available at http://www.storageperformance.org/results/benchmark_results_spc2#b00067. Results for IBM DS8870 are 15,423.66 SPC-2 MBPS, $131.21 SPC-2 Price-Performance. Full results are available at http://www.storageperformance.org/results/benchmark_results_spc2#b00062. Results for HP P9500 XP Disk Array are 13,147.87 SPC-2 MBPS, $88.34 SPC-2 Price-Performance. Full results are available at http://www.storageperformance.org/results/benchmark_results_spc2#b00056

SPC Benchmark-1 and SPC Benchmark-2 are trademarks of the Storage Performance Council.

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

September 11, 2013 at 3:05 pm

Posted in Oracle, storage


The National Security on the T5-4 and Big Data

with 5 comments

There’s been a lot of talk the last few days on Big Data and when it’s “right” to capture and use it. Some say it’s a real invasion of privacy. Others realistically point out that it is the best way to counter terrorism.

Whichever you believe, the important thing is that Big Data is being discussed not just in geeky meetings with IT managers but by everybody. When your neighbor across the street stops trimming his tree branches just to talk to you about it, you know it’s hot stuff.

So I was particularly interested to see that Oracle just published a new TPC-H data benchmark result on the SPARC T5-4.

And here is what hits you like a train.

  • Why is this published at only the 3TB size when all the talk these days is about much larger amounts of data?
  • Why is the Total Storage to Database Size ratio a whopping 29? Talk about overkill on storage to achieve performance. This number is many times the ratio we’ve seen from other results.
  • Why is the memory-to-database-size percentage a whopping 66.6%? Again, much more than you should need and than we normally see.
  • Why are 192 query streams needed? Most results use many, many fewer. That’s because TPC-H has a limited number of query variations; so when you run a lot of streams, you have a high probability that the same queries will be requested more than once. Oracle is greatly increasing the probability that they will have the results of the queries stored in their cache — which may not be representative of how their product would perform in a truly ad hoc query environment.
  • Why isn’t the configuration available now? Because key elements of the storage are not ready.
  • Why did Oracle once again include extremely minimal support in their pricing? Does $2300 a year sound like what you are paying for software “incident server support” . . . ? You don’t even need to answer this one.
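The query-stream point is just the birthday problem. If each stream draws its substitution parameters from a limited pool of variants, repeats become near-certain as streams multiply. The variant counts tried below are illustrative assumptions, not TPC-H specification values:

```python
# Birthday-problem sketch: if each of S query streams draws its substitution
# parameters from a pool of V variants, how likely is at least one repeat?
# The variant counts tried below are illustrative assumptions only.

def prob_repeat(streams, variants):
    """Probability that at least two streams draw the same variant."""
    p_all_distinct = 1.0
    for i in range(streams):
        p_all_distinct *= (1 - i / variants)
    return 1 - p_all_distinct

for v in (1_000, 10_000, 100_000):
    print(f"{v:>7,} variants, 192 streams: P(repeat) = {prob_repeat(192, v):.3f}")
```

Even with ten thousand variants per template, 192 streams make a cache-friendly repeat more likely than not.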

Comments are welcome at your own risk.

************************************************
(1) Oracle TPC-H result of 409,721 QphH@3000GB, $3.94 per QphH, availability 09/24/13, Oracle Database 11g R2 Enterprise Edition w/Partitioning, SPARC T5 3.6 GHz; total processors: 4, total cores: 64, total threads: 512.
Source: http://www.tpc.org. Results current as of 6/12/13.
TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

June 12, 2013 at 3:36 pm

Posted in SPARC T5, TPC-H


Shoe Fetish or Benchmark Comparison?

with 5 comments

Last month I visited the Fashion Institute of Technology’s new exhibit “Shoe Obsession.” And for anyone who relishes shoes, this was the place to be. You enter the dark rooms and the glass cases are absolutely glowing in light, highlighting the SHOES. There’s Manolo Blahnik, Christian Louboutin, Prada and many more, as far as the eye can see. Each shoe is made out of a huge array of materials — plastics, metals, beads, ribbons, velvet, even mirrors. Many have 6 inch heels. Or even higher. Gorgeous.

But of course most of these shoes you could never even wear — and not because there’s only one of each. These shoes don’t even make sense as shoes. What ultimately matters is that you can’t do the one thing shoes are for: walk in them.

Many times I see benchmark comparisons that likewise don’t focus on the right things. Here’s why cores ultimately matter in comparisons of systems:

  • Cores are the processing units for computation.
  • Cores are used to charge for software licensing.
  • Cores represent a more apples-to-apples method of comparing systems of varying technologies.
  • The right cores enable efficient virtualization and consolidation, which ultimately leads to better total cost of ownership.

Interesting, then, that when these facts are so clear, Oracle’s newest ad on the front page of the Wall Street Journal totally ignores processor cores and many other important components in its comparisons. As you look at the SPECjEnterprise2010 comparisons, here is what you need to know:

  • The IBM benchmark result is from 2012; the Oracle result is brand new. As we know, this is a lifetime of difference in benchmarking.
  • Oracle needed 4x the processing cores and 3x the memory of IBM for this benchmark. See all the details here and here.
  • The IBM POWER7+ Power 780 actually has over 1.5x more performance per core than the Oracle SPARC T5 system.(1)
  • Cost is not even a metric of this benchmark. And note that server cost does not include storage and the all expensive software licensing costs, which by the way, are calculated per core.


I like shoes and benchmark comparisons which make sense. Give me my New Balance any day. I can walk for miles in them, they look good, and their TCO screams.

Bottom line: Oracle’s latest comparative advertisement targeting IBM Power Systems, like so many before it, strains credulity. Caveat emptor.

************************************************

(1) SPARC T5-8 (8 chips, 128 cores), 27,843.57 SPECjEnterprise2010 EjOPS; IBM Power 780 (8 chips, 32 cores), 10,902.30 SPECjEnterprise2010 EjOPS. Source: http://www.spec.org. Results current as of 5/23/13.
SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation.

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

May 23, 2013 at 11:45 am

Oracle’s New T5 TPC-C: Where’s the SPARC?, Part II

with 5 comments

With Oracle’s new SPARC server announcement today, we are all still waiting in anticipation (take your pick of Rocky Horror or Carole King) for something exciting. The just released TPC-C benchmark result surely is not.

Here are some reasons why:

  • The performance of the Oracle T5-8 (even with the use of Oracle database partitioning) is downright lackluster. An IBM POWER6 result from 2008, two generations ago, is 42% higher per core. An IBM POWER7 result from 2010, one generation ago, delivers 2.2x the performance per core of the Oracle result. (1)
  • The price for all Oracle software support used in computing the price/performance for this benchmark is $2300/year. I can only guess what you get for that.
  • The Oracle database software is not even available until September. Yes, September.
  • It’s keeping me wa a a a aiting . . .
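Both per-core claims in the first bullet can be reproduced from the certified tpmC results in footnote (1):

```python
# Reproduce the per-core claims from the certified tpmC results in footnote (1).
systems = {
    "IBM Power 595 (2008, POWER6)": (6_085_166, 64),
    "IBM Power 780 (2010, POWER7)": (1_200_011, 8),
    "Oracle SPARC T5-8 (2013)":     (8_552_523, 128),
}

per_core = {name: tpmc / cores for name, (tpmc, cores) in systems.items()}
t5_per_core = per_core["Oracle SPARC T5-8 (2013)"]

power6_edge = per_core["IBM Power 595 (2008, POWER6)"] / t5_per_core
power7_edge = per_core["IBM Power 780 (2010, POWER7)"] / t5_per_core

print(f"POWER6 (2 generations back): {power6_edge - 1:.0%} higher per core")
print(f"POWER7 (1 generation back):  {power7_edge:.1f}x per core")
```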

************************************************

(1) IBM Power 780 (2 chips, 8 cores, 32 threads) with IBM DB2 9.5 (1,200,011 tpmC, $.69/tpmC, configuration available 10/13/10); IBM Power 595 (5 GHz, 32 chips, 64 cores, 128 threads) with IBM DB2 9.5 (6,085,166 tpmC, $2.81/tpmC, configuration available 12/10/08); vs. Oracle SPARC T5-8 (8 chips, 128 cores, 1024 threads – 8,552,523 tpmC, $.55/tpmC, configuration available 9/25/13).
Source: http://www.tpc.org. Results current as of 3/26/13.
TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.

Written by benchmarkingblog

March 26, 2013 at 2:23 pm

Posted in Oracle, SPARC T5, TPC-C