benchmarkingblog

Benchmarking and Systems Performance

Posts Tagged ‘SPECjEnterprise’

Oracle’s SPARC T5 and M5 Benchmarks: Lather, Rinse, Repeat


I think I’ve said this before, but one of my absolute favorite movies is Groundhog Day. (Attention: a spoiler is coming, but since the fricking movie is from 1993 and most of us were old even way back then, I don’t think I will be ruining it for anyone.) Groundhog Day is an American comedy film directed by Harold Ramis and starring Bill Murray and Andie MacDowell (who, by the way, I’ve been told I sort of look like, which is really cool since she does L’Oréal ads). In the film, an arrogant and egocentric TV weatherman, covering the annual Groundhog Day event, finds himself repeating the same day again and again.

The phrase “Groundhog Day” has now entered the common lexicon as a reference to an unpleasant situation that continually repeats, or seems to.

And I would say that is exactly what we have with Oracle’s new SPARC T5 and M5 benchmarks.

Just as with every Oracle processor announcement, the benchmark results follow the same script. Many of the claims rest on Oracle’s own benchmarks, which are neither published nor audited. There are a small number of industry-standard benchmarks — and of course these are ones where it is extremely difficult, if not impossible, to compare against other relevant results. For price claims, Oracle, as it has done in the past, only factors in the price of the pizza box – make sure you add in the all-important software and storage.

Let’s take a look at the T5 and M5 benchmark results:

  • SAP: The IBM POWER7+ with DB2 10 SAP SD 2-tier result from back in September was 1.3x greater per core than the M5 result and 1.9x greater per core than the T5 result. (1) IBM’s average database request time was also much better, and the IBM system drove higher CPU utilization.
  • TPC-C: An IBM POWER6 result from 2008, two generations ago, is 42% higher per core than the new T5 result on this OLTP benchmark. An IBM POWER7 result from 2010, one generation ago, delivers 2.2x the performance per core of the Oracle result. (2) (The per-core arithmetic behind these comparisons is sketched just after this list.) The price for all Oracle database software support used in computing the price/performance for this benchmark is $2,300/year – I can only guess what you get for that. Also note that this benchmark used Oracle Partitioning, which may not be realistic for your real-world workloads. The Oracle database software is not even available until September.
  • SPECjEnterprise2010: Oracle’s T5 result needed four times the number of database cores, four times the amount of memory and significantly more storage than the IBM POWER7 result. (3)
  • SPECjbb2013: For Java business, let’s run a benchmark that can only be compared with a couple of ProLiants, one of our old T4s, and a Supermicro. (4)
  • SPECcpu: IBM Power Systems is #1 – don’t forget to look at the number of cores behind the integer and floating-point claims.
  • TPC-H: Ha, got you. There is no TPC-H. Funny, I was expecting one based on what we saw for the T4. I wonder why . . .
  • The other benchmark claims? These are, once again, either Oracle’s own benchmarks or ones nobody cares about because they don’t look like anything we actually run. Chance of departure from useful benchmark results: 100%.
  • Don’t let these claims distract you from asking about the business value delivered by these systems.
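
If you want to check the per-core arithmetic yourself, here is a minimal sketch (Python, purely illustrative) that simply divides each published result by its core count, using only the figures quoted in footnotes (1) and (2) below. Per-core values are not official metrics of either benchmark; this is just the back-of-the-envelope math behind the comparisons above.

```python
# Back-of-the-envelope per-core arithmetic, using only the published figures
# quoted in footnotes (1) and (2). Illustrative only; per-core values are not
# official SAP SD or TPC-C metrics.

results = {
    # name: (published metric, cores) -- SAPS from footnote (1), tpmC from footnote (2)
    "IBM Power 780 POWER7+ (SAP SD, SAPS)": (311_720, 96),
    "Oracle SPARC M5-32 (SAP SD, SAPS)":    (472_600, 192),
    "Oracle SPARC T5-8 (SAP SD, SAPS)":     (220_950, 128),
    "IBM Power 780 POWER7 (TPC-C, tpmC)":   (1_200_011, 8),
    "IBM Power 595 POWER6 (TPC-C, tpmC)":   (6_085_166, 64),
    "Oracle SPARC T5-8 (TPC-C, tpmC)":      (8_552_523, 128),
}

per_core = {name: metric / cores for name, (metric, cores) in results.items()}

# SAP SD 2-tier: IBM per core is roughly 1.3x the M5 and 1.9x the T5
print(per_core["IBM Power 780 POWER7+ (SAP SD, SAPS)"] / per_core["Oracle SPARC M5-32 (SAP SD, SAPS)"])  # ~1.32
print(per_core["IBM Power 780 POWER7+ (SAP SD, SAPS)"] / per_core["Oracle SPARC T5-8 (SAP SD, SAPS)"])   # ~1.88

# TPC-C: POWER6 per core is ~42% higher, POWER7 per core is ~2.2x the T5 result
print(per_core["IBM Power 595 POWER6 (TPC-C, tpmC)"] / per_core["Oracle SPARC T5-8 (TPC-C, tpmC)"])      # ~1.42
print(per_core["IBM Power 780 POWER7 (TPC-C, tpmC)"] / per_core["Oracle SPARC T5-8 (TPC-C, tpmC)"])      # ~2.25
```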

    I wake up every day, right here, right in Cleveland, and it’s always snowing, and there’s nothing I can do about it. “Winter, slumbering in the open air, wears on its smiling face a dream… of spring.”

    ************************************************
    (1) IBM Power 780 (3.72 GHz) two-tier SAP SD Standard Application Benchmark result (SAP enhancement package 5 for the SAP ERP 6.0 application): 12 processors / 96 cores / 384 threads, POWER7+, 1536 GB memory, 57,024 SD benchmark users, running AIX® 7.1 and DB2® 10, dialog resp.: 0.98s, line items/hour: 6,234,330, dialog steps/hour: 18,703,000, SAPS: 311,720, DB time (dialog/update): 0.009s / 0.014s, CPU utilization: 99%, Certification #2012033.

    Oracle SPARC Server M5-32 SAP SD 2-tier result of 85,050 users, Average dialog response time: 0.80 seconds, Fully processed order line items per hour: 9,452,000, Dialog steps per hour: 28,356,000, SAPS: 472,600, Average database request time (dialog/update): 0.018 sec / 0.044 sec, CPU utilization of central server: 82%, Operating system, central server: Solaris 11, RDBMS: Oracle 11g, SAP Business Suite software: SAP enhancement package 5 for SAP ERP 6.0, 32 processors / 192 cores / 1536 threads, SPARC M5, 3.60 GHz, 16 KB (D) and 16 KB (I) L1 cache and 128 KB L2 cache per core, 48 MB L3 cache per processor, 4096 GB main memory, Certification #2013009.

    Oracle SPARC Server T5-8 SAP SD 2-tier result of 40,000 users, Average dialog response time: 0.86 seconds, Fully processed order line items per hour: 4,419,000, Dialog steps per hour: 13,257,000, SAPS: 220,950, Average database request time (dialog/update): 0.049 sec / 0.131 sec, CPU utilization of central server: 88%, Operating system, central server: Solaris 11, RDBMS: Oracle 11g, SAP Business Suite software: SAP enhancement package 5 for SAP ERP 6.0, 8 processors / 128 cores / 1024 threads, SPARC T5, 3.60 GHz, 16 KB (D) and 16 KB (I) L1 cache and 128 KB L2 cache per core, 8 MB L3 cache per processor, 2048 GB main memory, Certification #2013008.

    (2) IBM Power 780 (2 chips, 8 cores, 32 threads) with IBM DB2 9.5 (1,200,011 tpmC, $.69/tpmC, configuration available 10/13/10); IBM Power 595 (5 GHz, 32 chips, 64 cores, 128 threads) with IBM DB2 9.5 (6,085,166 tpmC, $2.81/tpmC, configuration available 12/10/08); vs. Oracle SPARC T5-8 (8 chips, 128 cores, 1024 threads – 8,552,523 tpmC, $.55/tpmC, configuration available 9/25/13).

    (3) WebSphere Application Server V7 on IBM Power 780 and DB2 on IBM Power 750 Express (64-core app server, 32-core db server), 16,646.34 SPECjEnterprise2010 EjOPS, vs. SPARC T5-8 server (SPARC T5-8 server base package, 8x SPARC T5 16-core processors, 128x 16 GB-1066 DIMMs, 2x 600 GB 10K RPM 2.5” SAS-2 HDD), 57,422.17 SPECjEnterprise2010 EjOPS.

    (4) http://www.oracle.com/us/solutions/performance-scalability/sparc-t5-2-specjbb2013-1925099.html

    Sources: http://www.spec.org, http://www.tpc.org, http://www.sap.com. Results current as of 3/26/13.

    TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).

    SAP, mySAP and other SAP product and service names mentioned herein as well as their respective
    logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all
    over the world.

    SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    March 26, 2013 at 5:53 pm

    SPARC T4 to the core


    Yesterday I went apple picking in rural Ohio. That makes sense.

    It’s not something that most people associate with Ohio, even though Ohio is actually one of the top apple-producing states. But it works rather well for this SPARC analysis.

    I usually love apple picking – with the doomed sun of autumn, the crunchy sweetness of the fruit, the dog wolfing down the cores. But there were certain aspects of my trip yesterday that were plainly unimpressive.

    Sort of like the latest SPARC T4 benchmark results announced by Oracle today:

  • Oracle claimed nine T4 world records. Seven of the nine are not industry-standard benchmarks but Oracle’s own, most based on internal testing. Sort of like when we called the orchard and they said that many varieties were available for picking. When we got there, only a few could really be picked. Where was that renowned low-hanging fruit?
  • Some Oracle claims compared the new T4 results with previous benchmark versions, never a good idea. Like encouraging your kids to climb on the fruit-bearing trees. Some results compared Oracle to Oracle. If you read carefully, some didn’t compare to anything.
  • Oracle claimed a “generational increase in performance” over previous versions. Note that this claim (which has no published benchmarks behind it) focuses on single-threaded applications – how many of those do you have? And you can easily get a 5x improvement when you start from a very, very small seed.
  • Oracle’s SPECjEnterprise2010 Java T4 benchmark result, which was highlighted, needed four times the number of app nodes, twice the number of cores, almost four times the amount of memory, and significantly more storage than the IBM POWER7 result. (1) Oracle’s price/performance and space metric claims (which are not even official benchmark metrics) were calculated only for the application tier of this benchmark, basically ignoring the all-important database server, software, and storage. Sort of like eating only the pulp of the apple and ignoring all the vitamins in the skin.
  • Oracle’s T4 TPC-H 1TB BI benchmark result, another benchmark which was highlighted, actually had a longer load time than the IBM result from last year. Oracle’s storage use was ludicrous, like the number of apples my Labrador ended up eating; Oracle’s ratio of total storage to database size was 10.80, compared to the IBM value of 3.97. Oracle needed 128 query streams, IBM only 9. And make sure to note the extremely low and unrealistic Oracle maintenance costs used to get to the price/performance number. (2) (A sketch of how that price/performance figure is formed follows this list.)
  • The range and results of these benchmarks are ultimately disappointing. Instead of making a wonderful pie and apple rings last night, we swept up chips of dried orchard mud in the dark.
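
Since the maintenance pricing comes up in the TPC-H bullet above, here is a rough sketch of where the price/performance number comes from, using only the QphH and $/QphH figures quoted in footnote (2). The totals are back-calculated from the published, rounded ratios, so treat them as approximate; the point is that hardware, software, and three years of maintenance all sit in the numerator, so a low maintenance price directly flatters the $/QphH figure.

```python
# Illustrative only: TPC-H price/performance is the total priced configuration
# cost (hardware, software, and three years of maintenance) divided by QphH.
# The totals below are back-calculated from the rounded $/QphH figures quoted
# in footnote (2), so they are approximate.

def price_performance(total_price_usd: float, qphh: float) -> float:
    """$/QphH = total priced configuration cost / QphH."""
    return total_price_usd / qphh

# Back-calculated totals from the published, rounded figures:
oracle_total = 4.60 * 201_487    # SPARC T4-4: roughly $0.93M
ibm_total = 6.85 * 164_747.2     # Power 780:  roughly $1.13M

print(f"Oracle implied total price: ${oracle_total:,.0f}")
print(f"IBM implied total price:    ${ibm_total:,.0f}")

# Sanity check: dividing back out recovers the published ratios
print(round(price_performance(oracle_total, 201_487), 2))   # 4.6
print(round(price_performance(ibm_total, 164_747.2), 2))    # 6.85
```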

    ************************************************

    (1) Oracle WebLogic Server 11g and Oracle Database 11g Release 2 with Oracle Real Application Clusters and Oracle Solaris running on a four-node SPARC T4-4 cluster, each system with four SPARC T4 3 GHz processors (128-core app server, 64-core db server), 40,104.86 SPECjEnterprise2010 EjOPS, vs. WebSphere Application Server V7 on IBM Power 780 and DB2 on IBM Power 750 Express (64-core app server, 32-core db server), 16,646.34 SPECjEnterprise2010 EjOPS.
    (2) SPARC T4-4 server (4 sockets/32 cores/256 threads), 201,487 QphH@1000GB, $4.60/QphH@1000GB, available 10/30/11, vs. IBM Power 780 Model 9179-MHB server (8 sockets/32 cores/128 threads), 164,747.2 QphH@1000GB, $6.85/QphH@1000GB, available 3/31/11.
    Sources: http://www.spec.org, http://www.tpc.org. Results current as of 9/26/11.

    SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).

    TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    September 26, 2011 at 9:43 pm

    Cisco/Oracle Surgery on SPECjEnterprise


    Last month I had gum surgery. It’s funny how they market it to you. You only hear about how wonderful it will be for you. If you ever knew about some of the specifics of recovery, you might never do it. Ice packs, tea bags, and a lot of Motrin. A big bandage in your mouth and stitches that take weeks to come out. Sometimes you wonder if all of that was really worth it.

    And that’s how I felt about Oracle’s new press release claiming a “world-record” Cisco/Oracle SPECjEnterprise2010 Java benchmark result. In order to get an astounding lead of roughly 4%, here’s what Cisco and Oracle had to do:

  • They had to use two application systems instead of one – two footprints, two servers with connections to manage.
  • They used only 4 JEE instances per server – how realistic is that?
  • They had to use a WebLogic version that isn’t even available until May.
  • They needed much more storage.
  • IBM is #1 in this benchmark for single-node results, surpassing Oracle by 76%. (1) (The arithmetic is sketched just after this list.)

  • One of the most common questions I got after the gum surgery was whether I lost any weight. Surely this must be a great way to diet. Alas, no — during recovery my two major food groups were pasta and tapioca pudding.
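
For reference, the percentages in the bullets above fall straight out of the EjOPS figures in footnote (1); a quick, purely illustrative check:

```python
# Quick check of the percentages above, using the EjOPS figures from
# footnote (1). Illustrative only.

ibm_single_node = 16_646.34    # WebSphere on Power 780 + DB2 on Power 750
cisco_two_node = 17_301.86     # WebLogic on two Cisco UCS B440 blade servers
oracle_single_node = 9_456.0   # WebLogic on SPARC T3-4

print(f"Cisco/Oracle two-node lead over IBM: {cisco_two_node / ibm_single_node - 1:.1%}")              # ~3.9%
print(f"IBM lead over the single-node Oracle result: {ibm_single_node / oracle_single_node - 1:.1%}")  # ~76.0%
```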

    ************************************************

    (1) SPECjEnterprise2010 result with WebSphere Application Server V7 on IBM Power 780 (64 cores, 8 chips) and DB2 9.7 on IBM Power 750 Express (32 cores, 4 chips): 16,646.34 EjOPS. Oracle WebLogic Server 11g and Oracle Database 11g Release 2 on two Cisco Unified Computing System B440 M1 Blade Servers: 17,301.86 SPECjEnterprise2010 EjOPS. Oracle WebLogic Server Standard Edition 10.3.3 on Oracle SPARC T3-4 (4 chips, 64 cores app; 2 chips, 32 cores db): 9,456 SPECjEnterprise2010 EjOPS.
    Source: http://www.spec.org. Results current as of 3/17/11.

    SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    March 17, 2011 at 11:56 am
