benchmarkingblog

Benchmarking and Systems Performance

The Wizard of OpenWorld

with one comment

Sometimes it’s great to see something for the hundredth time.

On Saturday night I went to see one of the all time greats, The Wizard of Oz — in 3D. The huge IMAX screen and 3D effects pulled you into the movie. I was dancing with the Munchkins and really skipping down that yellow brick road.

And sometimes you just want to cackle and destroy like the Wicked Witch of the West because you are being forced to see something for the hundredth time.

At Oracle OpenWorld’s keynote last night, the industry benchmarks that were highlighted made me want to do just that.

  • Oracle with Fujitsu claimed “14 World #1s.” Then, of course, doing what they do time and again, they actually discussed only a few of them.
  • In the SAP SD 2-tier comparison, Fujitsu/Oracle’s result was from 2013; IBM’s was from 2010. Fujitsu/Oracle’s result used 640 cores, IBM’s only 256. IBM’s result actually delivered over 2x the users per core of the Oracle/Fujitsu result. We have surely seen this before. Ain’t it the truth?(1)
  • The SPECjbb2013 comparison highlighted the M10 against some undesignated x86 system. Like the cowardly lion picking on little Toto.
  • The third benchmark was Stream, relevant for the very few in the commercial world.
  • Larry compared the M6-32 “Big Memory Machine” against a Power System. With absolutely no details and data to back the claim. We’ve seen this over and over as well.
  • Make no mistake about it. Absolutely none of these performance benchmarks includes any pricing component whatsoever as a metric. And any pricing that is shown should be analyzed: what storage is included, what do maintenance and support cost, is software added in? We’ve seen creative accounting here many times before.
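The per-core arithmetic is easy to check from the certified results in footnote (1); a quick sketch using only those published figures:

```python
# Users-per-core comparison of the two certified SAP SD 2-tier results
# cited in footnote (1): IBM cert 2010046 vs. Fujitsu/Oracle cert 2013014.
ibm_users, ibm_cores = 126_063, 256
fujitsu_users, fujitsu_cores = 153_000, 640

ibm_per_core = ibm_users / ibm_cores              # ≈ 492 users/core
fujitsu_per_core = fujitsu_users / fujitsu_cores  # ≈ 239 users/core

print(f"IBM: {ibm_per_core:.0f} users/core")
print(f"Fujitsu/Oracle: {fujitsu_per_core:.0f} users/core")
print(f"Ratio: {ibm_per_core / fujitsu_per_core:.2f}x")  # ≈ 2.06x
```

So a headline user count that is 21% higher took 2.5x the cores to get there.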

What was so special about seeing The Wizard of Oz on the big screen in 3D was that you noticed all of these incredible details (like the colorful birds, the beautiful expanse of red poppies, and the stage hand behind the apple trees) that you had never seen before. What was so NOT special about the OpenWorld keynote was that you were seeing the same old story — but with almost no details behind it. Once again.

************************************************

(1) IBM Power 795 (4.00 GHz) two-tier SAP SD Standard Application Benchmark result (SAP enhancement package 4 for SAP ERP 6.0 (Unicode)): 32 processors / 256 cores / 1024 threads, POWER7, 4096 GB memory, 126,063 SAP SD benchmark users, OS: AIX 7.1, DB2 9.7. Certification #: 2010046. Vs. Fujitsu M10-4S: 40 processors / 640 cores / 1280 threads, 153,000 SAP SD benchmark users, Oracle. Certification #: 2013014. Source: http://www.sap.com/benchmark. Results as of 9/23/13.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).

SAP, mySAP and other SAP product and service names mentioned herein as well as their respective
logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all
over the world.

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

September 23, 2013 at 8:59 am

Posted in Oracle


On Big Data: Count Me In, But Do It Right

with one comment

Our local high school is now offering a new class in introductory statistics. And from what I’ve been seeing lately, we need this like my dog needs rawhide. (You see, otherwise he will chew on sticks, rocks, and cement.)

I was recently reviewing some availability statistics. A regulatory group (which shall remain unnamed) was comparing the number of outages across different types of equipment. Which is all very fine. The problem was that they were counting raw outage totals, not rates. Raw counts mean very little when you may have hundreds of installed instances of one type of equipment and a total of ONE of another.
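The counts-versus-rates trap is easy to illustrate; a minimal sketch with made-up numbers (the installed-base and outage figures below are hypothetical, not from the regulatory report):

```python
# Raw outage counts vs. outage rates, with hypothetical installed bases.
# Equipment A: 200 installed units, 10 outages. Equipment B: 1 unit, 2 outages.
fleet = {
    "A": {"installed": 200, "outages": 10},
    "B": {"installed": 1,   "outages": 2},
}

for name, d in fleet.items():
    rate = d["outages"] / d["installed"]  # outages per installed unit
    print(f"{name}: {d['outages']} outages, {rate:.2f} per unit")

# By raw count, A looks 5x worse; per installed unit, B is 40x worse.
```

Same data, opposite conclusion, depending on whether you normalize.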

Another fallacy: they were analyzing the 95% of outages tied to one maintenance issue that had already been solved. What they really needed to focus on was the other 5%, and especially the outliers.

Another technique that drives me crazy is when someone rounds up when they should round down.

I’m not saying that everyone needs to have a deep understanding of multivariate ANOVA or the like. But with the plethora of Big Data applications and the way data is now woven into our society and in everything we do, it becomes exceedingly important to analyze and understand it in the right way.

We love to say “Do the Math.” But we need to make sure that when we do the math, we use the data in the correct and very best way to solve the problem.

************************************************

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

September 18, 2013 at 11:42 am

Posted in Big Data


Case of the Missing Benchmark and Other Cisco Tales

leave a comment »

Whether it’s Sherlock Holmes or Nancy Drew, it’s hard not to love mystery stories. It’s so great at the end when you realize, oh my gosh, I should have seen that coming. Or, I’m amazing, of course I saw that coming.

This week there was a large amount of hoopla around the announcement of the Intel Xeon Processor E5-2600 v2 product family. Which is all wonderful. But what is really interesting is Cisco’s new claim of 6 world record benchmarks surrounding the announcement.

Now, as we know, Cisco has a history of claiming #1 benchmarks by counting not just current #1 records, but records since the beginning of time. Let’s look at a few other tricks Cisco is using in claiming performance “records”:

  • Oracle E-Business Suite Applications R12 Benchmark — It’s not hard to beat a previous generation of yourself.
  • SPECjbb2013 Benchmark (Java server performance) — Again, the claim is essentially a win over themselves.
  • VMware View Planner Benchmark (desktop virtualization performance) — This is great, but how hard is it really to be #1 when you are the only one?

 

But what is really interesting about Cisco’s list of benchmarks is what is missing. You see, Cisco also published an SAP SD 2-tier result but it is noticeably missing from the “world record” list.

Maybe, just maybe, because it happens to be behind three others — HP, Fujitsu, and IBM.

************************************************

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).

VMware, the VMware “boxes” logo and design, Virtual SMP and VMotion are registered trademarks or trademarks (the “Marks”) of VMware, Inc. in the United States and/or other jurisdictions.

SAP, mySAP and other SAP product and service names mentioned herein as well as their respective
logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all
over the world.

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

September 12, 2013 at 12:41 pm

Posted in Cisco, Intel


Taking the Wind Out of Oracle’s Sails

with one comment

I don’t always read the sports pages. But lately, with the US Open, the Olympics win for Japan, and college football, how could I not?

And lo and behold — instead of a splashy ad on the front page of the paper, there was an article this week deep into the sports section — about Oracle.

It appears that the Oracle team in the America’s Cup competition was in the news — not for doing well — but for receiving penalties. The penalties, the harshest in America’s Cup history, were imposed for illegally modifying 45-foot catamarans.

One place where we would like to think that “illegal modifications” are also not tolerated is in benchmarking.

Oracle this week claimed performance and price-performance leadership based on the Storage Performance Council SPC-2 benchmark. I’m sure that, this being an industry standard benchmark, there were no modifications – but that doesn’t mean the comparisons claimed hold up without some difficulties. Here’s what you need to know:

  • The Oracle ZFS Storage ZS3-4 result was just released. The IBM and HP results they compare to are from 2012, a lifetime ago in the benchmarking world.
  • The Oracle storage result used a 2-node cluster and 1.6x the physical capacity of the IBM DS8870 result.(1)
  • A fit for purpose methodology is needed for these storage comparisons – are you running analytics or critical batch processing? Different workloads require different levels of nonfunctional requirements which translate into different types of storage.
  • With storage, it’s essential to compare all the options, including many of the new flash offerings.
  • What is the reliability and support for these storage devices? Instead of just price/performance, make sure you study the real TCO.
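The raw throughput gap is worth putting next to those caveats; a quick sketch using only the published figures from footnote (1):

```python
# SPC-2 figures as published on storageperformance.org (see footnote (1)).
results = {
    "Oracle ZS3-4": {"mbps": 17_244.22, "price_perf": 22.53},
    "IBM DS8870":   {"mbps": 15_423.66, "price_perf": 131.21},
    "HP P9500 XP":  {"mbps": 13_147.87, "price_perf": 88.34},
}

# Throughput edge of the brand-new Oracle result over the 2012 IBM result:
ratio = results["Oracle ZS3-4"]["mbps"] / results["IBM DS8870"]["mbps"]
print(f"Throughput ratio: {ratio:.2f}x")  # ≈ 1.12x on raw SPC-2 MBPS
```

About a 12% throughput edge over a result that is a year older, achieved with a 2-node cluster and 1.6x the physical capacity.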

 

It matters whether you win or lose. But it also matters how you play the game.

************************************************

(1) Results as of September 10, 2013. For more information go to http://www.storageperformance.org/results (SPC-2). Results for Oracle ZFS Storage ZS3-4: 17,244.22 SPC-2 MBPS™, $22.53 SPC-2 Price-Performance. Full results are available at http://www.storageperformance.org/results/benchmark_results_spc2#b00067. Results for IBM DS8870: 15,423.66 SPC-2 MBPS, $131.21 SPC-2 Price-Performance. Full results are available at http://www.storageperformance.org/results/benchmark_results_spc2#b00062. Results for HP P9500 XP Disk Array: 13,147.87 SPC-2 MBPS, $88.34 SPC-2 Price-Performance. Full results are available at http://www.storageperformance.org/results/benchmark_results_spc2#b00056.

SPC Benchmark-1 and SPC Benchmark-2 are trademarks of the Storage Performance Council.

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

September 11, 2013 at 3:05 pm

Posted in Oracle, storage


HP Hamming It Up

leave a comment »

It was a cold and snowy day last January when I pulled into the parking lot in Streetsboro, Ohio. I was really pleased to be having lunch unexpectedly with two family members. I was not so pleased with the place we were going to have lunch — what I thought was a “fast food” joint that only served ham.

But oh how different from my expectations — incredibly nice staff, a wonderful place to sit, and the choice of exactly what you wanted on your sandwich — all of which made for an awesome lunch.

So I was excited this morning to see this same organization highlighted in a new press release from HP. Storage upgrades were discussed, along with claims of an amazing performance boost. Seasonal demands for ham would now be able to be addressed.

The problem of course is not with the ham but with the claims and the lack of data:

  • Was the bottleneck actually with the batch window for sales data?
  • Was it only during peak time of the holiday period?
  • Did the upgrade really also reduce transaction processing times?
  • What were the before and after results?
  • The data center refresh also included networking and servers. How were these claims attributed to the storage?
  • How would any improvements compare with other vendor products? How do the Storage Performance Council (SPC) industry standard results stack up?

 

Unfortunately the one footnote merely states “Based on customer results.”

But the most important question, of course, is: will I get my ham sandwich much, much faster next time?

************************************************

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

August 29, 2013 at 12:05 pm

Posted in HP, storage


IBM is Tops, Once Again, in Technical Computing

with 2 comments

The TOP500 list of the world’s most powerful supercomputers was just released.

IBM had numerous leading entries in this list. Let us count the ways. IBM had the:

  • Most installed aggregate throughput, with over 73.2 petaflops out of 223 petaflops (32.8%). HP had 14.7%. And Oracle? Oracle had 0.4%. Yes, that’s 0.4%. And IBM has held this lead for 28 lists in a row.
  • Most in TOP 10 with 5 (#3 LLNL-Sequoia BG/Q, #5 ANL-Mira BG/Q, #7 Juelich–JUQUEEN BG/Q, #8 LLNL-Vulcan BG/Q, #9 LRZ-SuperMUC iDataPlex)
  • Most in TOP 20 with 9
  • Most in TOP 100 with 34
  • Fastest system in Europe (Juelich-JUQUEEN BG/Q)
  • Fastest Intel based system (x86-only LRZ-SuperMUC iDataPlex)
  • 22 of 28 most energy-efficient systems (over 2,000 MF/w)

************************************************

Source: http://www.top500.org. Results current as of 6/18/13.

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

June 18, 2013 at 12:56 pm

Posted in TOP500


The National Security on the T5-4 and Big Data

with 5 comments

There’s been a lot of talk the last few days on Big Data and when it’s “right” to capture and use it. Some say it’s a real invasion of privacy. Others realistically point out that it is the best way to counter terrorism.

Whichever you believe, the important thing is that Big Data is being discussed not just in geeky meetings with IT managers but by everybody. When your neighbor across the street stops trimming his tree branches just to talk to you about it, you know it’s hot stuff.

So I was particularly interested to see that Oracle just published a new TPC-H data benchmark result on the SPARC T5-4.

And here is what hits you like a train.

  • Why is this published at only the 3TB size when all the talk these days is about much larger amounts of data?
  • Why is the total storage to database size ratio a whopping 29? Talk about overkill on storage to achieve performance. This number is many times the ratio we’ve seen from other results.
  • Why is the memory to database size percentage a whopping 66.6? Again, much more than you should need and than what we normally see.
  • Why are 192 query streams needed? Most results use many, many fewer. TPC-H has a limited number of query variations, so when you run a lot of streams, there is a high probability that the same queries will be requested more than once. Oracle is greatly increasing the probability that the results of the queries will already be sitting in cache — which may not be representative of how their product would perform in a truly ad hoc query environment.
  • Why isn’t the configuration available now? Because key elements of the storage are not ready.
  • Why did Oracle once again include extremely minimal support in their pricing? Does $2,300 a year sound like what you are paying for software “incident server support”? You don’t even need to answer this one.
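The first few ratios above can be sanity-checked from the published 3 TB scale factor; a quick back-of-envelope sketch (the derived totals are implied by the ratios, not quoted from the disclosure):

```python
# Back-of-envelope check of the TPC-H configuration ratios called out above.
db_size_tb = 3.0        # published scale factor: 3000 GB
storage_ratio = 29      # total storage / database size
memory_pct = 66.6       # memory as a % of database size

total_storage_tb = db_size_tb * storage_ratio  # ≈ 87 TB of storage
memory_tb = db_size_tb * memory_pct / 100      # ≈ 2 TB of memory

print(f"Implied storage: {total_storage_tb:.0f} TB")
print(f"Implied memory: {memory_tb:.1f} TB")
```

Roughly 87 TB of storage and 2 TB of memory behind a 3 TB database, which is why the overkill question matters.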

Comments are welcome at your own risk.

    ************************************************
    (1) Oracle TPC-H of 409,721 QphH@3000GB,$3.94 per QphH,Availability 09/24/13,Oracle Database 11g R2 Enterprise Edition w/Partitioning,SPARC T5 3.6 GHz; Total # of Processors: 4,Total # of Cores: 64,Total # of Threads: 512.
    Source: http://www.tpc.org. Results current as of 6/12/13.
    TPC-C ,TPC-H, and TPC-E are trademarks of the Transaction Performance Processing Council (TPPC).

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

June 12, 2013 at 3:36 pm

Posted in SPARC T5, TPC-H

