benchmarkingblog

Elisabeth Stahl on Benchmarking and IT Optimization

IBM is Tops, Once Again, in Technical Computing

with 2 comments

The TOP500 list of the world’s most powerful supercomputers was just released.

IBM had numerous leading entries in this list. Let us count the ways. IBM had the:

  • Most installed aggregate throughput, with over 73.2 Petaflops of the list's 223 Petaflops (32.8%). HP had 14.7%. And Oracle? Oracle had 0.4%. Yes, 0.4%. IBM has held this lead for 28 lists in a row.
  • Most in TOP 10 with 5 ( #3 LLNL-Sequoia BG/Q, #5 ANL-Mira BG/Q, #7 Juelich–JUQUEEN BG/Q, #8 LLNL-Vulcan BG/Q, #9 LRZ-SuperMUC iDataPlex)
  • Most in TOP 20 with 9
  • Most in TOP 100 with 34
  • Fastest system in Europe (Juelich-JUQUEEN BG/Q)
  • Fastest Intel based system (x86-only LRZ-SuperMUC iDataPlex)
  • 22 of the 28 most energy-efficient systems (over 2,000 Mflops/watt)

    ************************************************

    Source: http://www.top500.org. Results current as of 6/18/13.

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    June 18, 2013 at 12:56 pm

    Posted in TOP500


    National Security, the T5-4, and Big Data

    with 5 comments

    There’s been a lot of talk the last few days on Big Data and when it’s “right” to capture and use it. Some say it’s a real invasion of privacy. Others realistically point out that it is the best way to counter terrorism.

    Whichever you believe, the important thing is that Big Data is being discussed not just in geeky meetings with IT managers but by everybody. When your neighbor across the street stops trimming his tree branches just to talk to you about it, you know it’s hot stuff.

    So I was particularly interested to see that Oracle just published a new TPC-H data benchmark result on the SPARC T5-4.

    And here is what hits you like a train.

    • Why is this published at only the 3TB scale factor when all the talk these days is about much larger amounts of data?
    • Why is the Total Storage to Database Size ratio a whopping 29? Talk about overkill on storage to achieve performance. This number is many times the ratio we've seen in other results (see the quick arithmetic after this list).
    • Why is the memory-to-database-size ratio a whopping 66.6%? Again, far more than you should need and far more than we normally see.
    • Why are 192 query streams needed? Most results use far fewer. TPC-H has a limited number of query variations, so when you run a lot of streams there is a high probability that the same queries will be requested more than once. Oracle is greatly increasing the probability that the results of those queries are already sitting in its cache — which may not be representative of how the product would perform in a truly ad hoc query environment.
    • Why isn’t the configuration available now? Because key elements of the storage are not ready.
    • Why did Oracle once again include extremely minimal support in their pricing? Does $2300 a year sound like what you are paying for software “incident server support” . . . ? You don’t even need to answer this one.
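To put those storage and memory numbers in perspective, here is a quick back-of-the-envelope sketch in Python. The 3TB scale factor, the storage-to-database ratio of 29, and the 66.6% memory figure come from the published result discussed above; the derived totals are my own arithmetic, not figures from the TPC disclosure.

```python
# Back-of-the-envelope arithmetic for the Oracle SPARC T5-4 TPC-H result discussed above.
# Inputs come from the post; the derived totals are illustrative only.

db_size_tb = 3.0              # TPC-H scale factor: 3 TB (3000 GB)
storage_to_db_ratio = 29      # total storage / database size, as reported
memory_pct_of_db = 66.6       # memory as a percentage of database size, as reported

total_storage_tb = db_size_tb * storage_to_db_ratio       # ~87 TB of storage behind a 3 TB database
total_memory_tb = db_size_tb * memory_pct_of_db / 100.0   # ~2 TB of memory for a 3 TB database

print(f"Storage configured: ~{total_storage_tb:.0f} TB for a {db_size_tb:.0f} TB database")
print(f"Memory configured:  ~{total_memory_tb:.1f} TB ({memory_pct_of_db}% of database size)")
```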

    Comments are welcome at your own risk.

    ************************************************
    (1) Oracle TPC-H result of 409,721 QphH@3000GB, $3.94 per QphH, availability 09/24/13, Oracle Database 11g R2 Enterprise Edition w/ Partitioning, SPARC T5 3.6 GHz; total # of processors: 4, total # of cores: 64, total # of threads: 512.
    Source: http://www.tpc.org. Results current as of 6/12/13.
    TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    June 12, 2013 at 3:36 pm

    Posted in SPARC T5, TPC-H


    Shoe Fetish or Benchmark Comparison?

    with 5 comments

    Last month I visited the Fashion Institute of Technology’s new exhibit “Shoe Obsession.” And for anyone who relishes shoes, this was the place to be. You enter the dark rooms and the glass cases are absolutely glowing in light, highlighting the SHOES. There’s Manolo Blahnik, Christian Louboutin, Prada and many more, as far as the eye can see. Each shoe is made out of a huge array of materials — plastics, metals, beads, ribbons, velvet, even mirrors. Many have 6 inch heels. Or even higher. Gorgeous.

    But of course most of these shoes you could never even wear — and not because there's only one of each. These shoes don't even make sense as shoes. What ultimately matters is that you can't do with them the one thing shoes are for: walking.

    I often see benchmark comparisons that likewise don't focus on the right things. Here's why, in comparisons of systems, cores ultimately matter:

    • Cores are the processing units for computation.
    • Cores are used to charge for software licensing.
    • Cores represent a more apples-to-apples method of comparing systems of varying technologies.
    • The right cores enable efficient virtualization and consolidation, which ultimately leads to better total cost of ownership.

    It's interesting that, with these facts so clear, Oracle's newest ad on the front page of the Wall Street Journal totally ignores processor cores and many other important components of the comparison. As you look at the SPECjEnterprise2010 comparisons, here is what you need to know:

    • The IBM benchmark result is from 2012; the Oracle result is brand new. As we know, that is a lifetime of difference in benchmarking.
    • Oracle needed 4x the processing cores and 3x the memory that IBM did for this benchmark. See all the details here and here.
    • The IBM POWER7+ Power 780 actually has over 1.5x more performance per core than the Oracle SPARC T5 system (the per-core arithmetic is sketched just after this list).(1)
    • Cost is not even a metric of this benchmark. And note that the server cost does not include storage or the all-important (and expensive) software licensing costs, which, by the way, are calculated per core.
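The per-core claim above is simple enough to show. This short Python sketch uses the published EjOPS and core counts cited in footnote (1) below; the per-core figures and the ratio are derived for illustration, not separately published metrics.

```python
# Per-core arithmetic behind the "over 1.5x more performance per core" claim,
# using the published SPECjEnterprise2010 results cited in footnote (1).

results = {
    "IBM Power 780 (POWER7+)": {"ejops": 10_902.30, "cores": 32},
    "Oracle SPARC T5-8":       {"ejops": 27_843.57, "cores": 128},
}

per_core = {name: r["ejops"] / r["cores"] for name, r in results.items()}
for name, value in per_core.items():
    print(f"{name}: {value:,.1f} EjOPS per core")

ratio = per_core["IBM Power 780 (POWER7+)"] / per_core["Oracle SPARC T5-8"]
print(f"IBM per-core advantage: {ratio:.2f}x")   # roughly 1.57x
```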

     

    I like shoes and benchmark comparisons which make sense. Give me my New Balance any day. I can walk for miles in them, they look good, and their TCO screams.

    Bottom line: Oracle's latest comparative advertisement targeting IBM Power Systems, like so many before it, strains credulity. Caveat emptor.

    ************************************************

    (1) SPARC T5-8 (8 chips, 128 cores), 27,843.57 SPECjEnterprise2010 EjOPS; IBM Power 780 (8 chips, 32 cores), 10,902.30 SPECjEnterprise2010 EjOPS. Source: http://www.spec.org. Results current as of 5/23/13.
    SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation.

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    May 23, 2013 at 11:45 am

    On Investing in the Cloud

    leave a comment »

    It’s 7AM on a weekday morning and I’m in the middle of nowhere in the middle of Ohio. And as I drive by the faded barns, the cattle, and the one town with the giant McDonald’s and the adult bookstore, I start to laugh.

    I’m alone in the car. I’m supposed to be thinking deep thoughts about what I will discuss at a client meeting in an hour or two. I don’t even let myself listen to music in case it distracts me. But I can’t keep from cracking up.

    You see, it’s a billboard. Specifically the words on the billboard. An advertisement for a store. And not just any store. This is for, of all things, Grandpa’s Cheesebarn.

    I don't know why it's so funny. It reminds me of my grandpa in a plaid robe and slippers. Maybe even eating cheese. Or smelling like cheese. In a barn. (Disclosure: it actually looks like a great place to buy many cool foods, and I promise to stop there next time.)

    Anyway, the name is really really really funny.

    Which reminded me of an article I read this morning on an investment manager’s thoughts on cloud computing.

    Here’s the reality and what, of course, all of us who are actually in IT already know:

    • Private, public, and hybrid clouds all have their places. Some applications are best in an organization’s private cloud. Sometimes applications do well in a public cloud. And sometimes hybrids are the perfect solution.
    • Best fit for these options depends on many things, including security, reliability, availability, and performance.
    • Guess what? Clouds are actually backed by something real — called servers.
    • IBM and other IT companies offer many products and services. Hardware is one piece. Software is another. Complex transformation services are another. All are valuable in their own way and integrate to make IT solutions for clients that end up running important businesses for all of us.

     

    This morning I realized that taking advice from an investment manager on cloud computing is like trying to get an oil change at Grandpa’s Cheesebarn. You just shouldn’t.

    ************************************************
    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    April 23, 2013 at 11:49 am

    Posted in Cloud


    On Fencing Claims and Real World Benchmarks

    leave a comment »

    I am so intimately familiar with fencing it’s not funny.

    I've had épées, foils, and even sabers at my house. I've been to Junior Olympics (as a spectator, of course). My washer has seen fencing jackets, knickers, and a varied assortment of brightly colored and very sweaty high socks.

    I grew up with a wooden hitching post fence in my yard. My current neighbor has a white picket fence that I look at every day. There is a chain link fence in the back so my black lab can’t get into trouble.

    But the fencing I really wanted to talk about is the fencing of claims.

    OK, so maybe if a metric is not an overall #1 it does make sense to look at it in a slightly different way. I get per core. I get single system. I get #1 for a particular subset of a benchmark suite. But I would say it has just about gone too far.

    A claim I saw the other day was fenced not by a piece of hardware, not a particular type of system, not a benchmark category. This claim was a “world record” for a very specific enhancement package of a very specific version of a very specific type of application software. It’s like saying I am the #1 grape picker in the world with purple eyes wearing yellow pants with green stripes on them. Oh, and pink stilettos. Oh and by the way, those grapes are actually raisins.

    Enough already with playing these segmentation games.

    Many times what really matters most is how your specific workload performs on a specific system.


    For that I would recommend running your own workload on the systems you are considering: the ultimate real world benchmark.

     

     

    ************************************************
    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    April 15, 2013 at 4:20 pm

    Posted in SAP


    Moonshot vs. Metrics

    with 2 comments

    Some of you may know that last week I was on some college visits. I love this time of info sessions, tours, cafeterias, classrooms, and dorms. I even got to stay in a real dorm one night. Funny, it was not any dorm I had ever been in, not at a college that I had ever been at, and not even a section of a city I had ever been to – but the cooking smells brought me right back to senior year.

    Anyway, one thing I've noticed with these visits is how important statistics are. At first I was so tired of asking and hearing the same questions on metrics. I mean, isn't it the feel of the campus that matters? But I found that the really key questions are ones like these: What percent of freshmen live in dorms? What is the student-to-professor ratio? What is the placement rate six months after graduation in the field of study? The answers to these questions, backed by real data, really matter, because in the end that's what the big bucks are paying for.

    I've been feeling that way this week as I look at some of the latest IT news. I see Moonshot claims, backed by "internal HP engineering." I see Fujitsu M10 and Oracle "faster performance" claims that don't even go that far. The thinking seems to be: if I don't have any data, maybe if I don't include a footnote nobody will notice.

    Meanwhile, IBM this week published a new #1 SAP Sales and Distribution 3-tier benchmark result. 266K users, over 1.4M SAPS, over 29M line items per hour, over 88M dialog steps per hour, on the POWER7+ IBM Power 780 with DB2 10.5.(1) With more metrics available than you probably even want to know about.
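As a quick sanity check on those headline numbers, the standard SAP SD definition (100 SAPS = 2,000 fully processed order line items per hour, or 6,000 dialog steps per hour) ties the metrics in footnote (1) together. A minimal sketch, using only the published figures:

```python
# Cross-checking the headline metrics from footnote (1) against the standard SAP SD
# definition: 100 SAPS = 2,000 fully processed order line items per hour (= 6,000 dialog steps per hour).

line_items_per_hour   = 29_433_670
dialog_steps_per_hour = 88_301_000
published_saps        = 1_471_680

saps_from_line_items = line_items_per_hour / 2_000 * 100            # ~1,471,684
steps_per_line_item  = dialog_steps_per_hour / line_items_per_hour  # ~3.0

print(f"SAPS implied by line items/hour: {saps_from_line_items:,.0f} (published: {published_saps:,})")
print(f"Dialog steps per line item: {steps_per_line_item:.2f} (definition: 3)")
```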

    I hope HP’s alternative thinking with Moonshot was referring to the United States Apollo program — and not the abortive Soviet moonshot or the defunct beer with caffeine.

    ************************************************
    (1)IBM Power 780 three-tier SAP SD standard application benchmark on SAP enhancement package 5 for SAP ERP 6.0 achieved 266,000 SAP SD benchmark users. Configuration: 8 processors / 64 cores / 256 threads, POWER7+ 3.72 GHz, 512 GB memory, running AIX 7.1, DB2® 10.5; dialog resp.: 0.84s, line items/hour: 29,433,670, dialog steps/hour: 88,301,000, SAPS: 1,471,680, DB time (dialog/ update): .036s/.061s, DB CPU utilization: 97%, average application server CPU utilization: 88%. Certification #2013010.
    Source: http://www.sap.com. Results current as of 4/10/13.

    SAP, mySAP and other SAP product and service names mentioned herein as well as their respective
    logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all
    over the world.

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    April 10, 2013 at 12:03 pm

    Posted in POWER7, SAP


    Oracle’s SPARC T5 and M5 Benchmarks: Lather, Rinse, Repeat

    with 21 comments

    I think I've said this before, but one of my absolute favorite movies is Groundhog Day. (Attention: a spoiler is coming, but since the fricking movie is from 1993 and most of us were old even way back then, I don't think I will be ruining it for anyone.) Groundhog Day is an American comedy film directed by Harold Ramis and starring Bill Murray and Andie MacDowell (whom, by the way, I've been told I sort of look like, which is really cool since she does L'Oréal ads). In the film, an arrogant and egocentric TV weatherman, covering the annual Groundhog Day event, finds himself repeating the same day again and again.

    The phrase "Groundhog Day" has now entered the common lexicon as a reference to an unpleasant situation that continually repeats, or seems to.

    And I would say that is exactly what we have with Oracle’s new SPARC T5 and M5 benchmarks.

    Just as with every Oracle processor announcement, the benchmark results do the same thing. Many of the claims are Oracle’s own benchmarks that are not published and audited. There are a small number of industry standard benchmarks — and of course these are ones where it is extremely difficult, if not impossible, to compare to other relevant results. For price claims, Oracle, as they’ve done in the past, only factors in the price of the pizza box – make sure you add in the all-important software and storage.

    Let’s take a look at the T5 and M5 benchmark results:

  • SAP: The IBM POWER7+ with DB2 10 SAP SD 2-tier result from back in September was 1.3x greater per core than the M5 result and 1.9x greater than the T5 result.(1) The IBM average database request time was also much better, and the IBM system used its CPU more effectively.
  • TPC-C: An IBM POWER6 result from 2008, 2 generations ago, is 42% higher per core than the new T5 result on this OLTP benchmark. An IBM POWER7 result from 2010, 1 generation ago, is 2.2x better per core than the Oracle result (the per-core arithmetic is sketched just after this list).(2) The price for all Oracle database software support used in computing the price/performance for this benchmark is $2300/year – I can only guess what you get for that. Also note that this benchmark used Oracle Partitioning, which may not be realistic for your real world workloads. And the Oracle database software is not even available until September.
  • SPECjEnterprise2010: Oracle’s T5 result needed four times the number of database cores, four times the amount of memory and significantly more storage than the IBM POWER7 result. (3)
  • SPECjbb2013: For Java business, let’s run a benchmark that can only be compared with a couple of ProLiants, one of our old T4s, and a Supermicro. (4)
  • SPECcpu: IBM Power Systems is #1 – don’t forget to look at number of cores for integer and floating point claims.
  • TPC-H: Ha, got you. There is no TPC-H. Funny, I was expecting one based on what we saw for the T4. I wonder why . . .
  • The other benchmark claims? These are once again ones that either are Oracle’s own benchmarks or ones nobody cares about because they don’t look like anything we actually run. Chance of departure from useful benchmark results: 100%.
  • Don’t let these claims distract from asking about the business value delivered by these systems.
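For the TPC-C comparison in the list above, here is the per-core arithmetic, using only the tpmC and core counts from footnote (2). tpmC per core is a derived, illustrative figure, not an official TPC metric.

```python
# Per-core arithmetic for the TPC-C comparison, using the published results in footnote (2).
# tpmC per core is derived here for illustration only.

tpcc = {
    "IBM Power 595 (POWER6, 64 cores, 2008)": {"tpmc": 6_085_166, "cores": 64},
    "IBM Power 780 (POWER7, 8 cores, 2010)":  {"tpmc": 1_200_011, "cores": 8},
    "Oracle SPARC T5-8 (128 cores, 2013)":    {"tpmc": 8_552_523, "cores": 128},
}

per_core = {name: r["tpmc"] / r["cores"] for name, r in tpcc.items()}
for name, value in per_core.items():
    print(f"{name}: {value:,.0f} tpmC per core")

t5 = per_core["Oracle SPARC T5-8 (128 cores, 2013)"]
print(f"POWER6 (2008) vs. T5 per core: {per_core['IBM Power 595 (POWER6, 64 cores, 2008)'] / t5:.2f}x")  # ~1.42x
print(f"POWER7 (2010) vs. T5 per core: {per_core['IBM Power 780 (POWER7, 8 cores, 2010)'] / t5:.2f}x")   # ~2.25x
```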

    I wake up every day, right here, right in Cleveland, and it’s always snowing, and there’s nothing I can do about it. “Winter, slumbering in the open air, wears on its smiling face a dream… of spring.”

    ************************************************
    (1) IBM Power 780 (3.72 GHz) two-tier SAP SD Standard Application Benchmark result (SAP enhancement package 5 for the SAP ERP 6.0 application): 12 processors / 96 cores / 384 threads, POWER7+, 1536 GB memory, 57,024 SD benchmark users, running AIX® 7.1 and DB2® 10, dialog resp.: 0.98s, line items/hour: 6,234,330, dialog steps/hour: 18,703,000, SAPS: 311,720, DB time (dialog/update): 0.009s / 0.014s, CPU utilization: 99%, Certification #2012033.

    Oracle SPARC Server M5-32 SAP SD 2-tier result of 85,050 users: average dialog response time: 0.80 seconds, fully processed order line items per hour: 9,452,000, dialog steps per hour: 28,356,000, SAPS: 472,600, average database request time (dialog/update): 0.018 sec / 0.044 sec, CPU utilization of central server: 82%, operating system (central server): Solaris 11, RDBMS: Oracle 11g, SAP Business Suite software: SAP enhancement package 5 for SAP ERP 6.0, 32 processors / 192 cores / 1536 threads, SPARC M5, 3.60 GHz, 16 KB (D) and 16 KB (I) L1 cache and 128 KB L2 cache per core, 48 MB L3 cache per processor, 4096 GB main memory, Certification #2013009.

    Oracle SPARC Server T5-8 SAP SD 2-tier result of 40,000 users: average dialog response time: 0.86 seconds, fully processed order line items per hour: 4,419,000, dialog steps per hour: 13,257,000, SAPS: 220,950, average database request time (dialog/update): 0.049 sec / 0.131 sec, CPU utilization of central server: 88%, operating system (central server): Solaris 11, RDBMS: Oracle 11g, SAP Business Suite software: SAP enhancement package 5 for SAP ERP 6.0, 8 processors / 128 cores / 1024 threads, SPARC T5, 3.60 GHz, 16 KB (D) and 16 KB (I) L1 cache and 128 KB L2 cache per core, 8 MB L3 cache per processor, 2048 GB main memory, Certification #2013008.

    (2) IBM Power 780 (2 chips, 8 cores, 32 threads) with IBM DB2 9.5 (1,200,011 tpmC, $.69/tpmC, configuration available 10/13/10); IBM Power 595 (5 GHz, 32 chips, 64 cores, 128 threads) with IBM DB2 9.5 (6,085,166 tpmC, $2.81/tpmC, configuration available 12/10/08); vs. Oracle SPARC T5-8 (8 chips, 128 cores, 1024 threads – 8,552,523 tpmC, $.55/tpmC, configuration available 9/25/13).

    (3) WebSphere Application Server V7 on IBM Power 780 and DB2 on IBM Power 750 Express (64-core app server, 32-core DB server), 16,646.34 SPECjEnterprise2010 EjOPS, vs. SPARC T5-8 server (SPARC T5-8 server base package, 8x SPARC T5 16-core processors, 128x 16 GB-1066 DIMMs, 2x 600 GB 10K RPM 2.5" SAS-2 HDD), 57,422.17 SPECjEnterprise2010 EjOPS.

    (4) http://www.oracle.com/us/solutions/performance-scalability/sparc-t5-2-specjbb2013-1925099.html

    Sources: http://www.spec.org, http://www.tpc.org, http://www.sap.com. Results current as of 3/26/13.

    TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).

    SAP, mySAP and other SAP product and service names mentioned herein as well as their respective
    logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all
    over the world.

    SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    March 26, 2013 at 5:53 pm

    Oracle’s New T5 TPC-C: Where’s the SPARC?, Part II

    with 5 comments

    With Oracle’s new SPARC server announcement today, we are all still waiting in anticipation (take your pick of Rocky Horror or Carole King) for something exciting. The just released TPC-C benchmark result surely is not.

    Here are some reasons why:

  • The performance of the Oracle T5-8 (even with the use of Oracle database partitioning) is downright lackluster. An IBM POWER6 result from 2008, 2 generations ago, is 42% higher per core. An IBM POWER7 result from 2010, 1 generation ago, is 2.2x better performance per core than the Oracle result. (1)
  • The price for all Oracle software support used in computing the price/performance for this benchmark is $2300/year. I can only guess what you get for that.
  • The Oracle database software is not even available until September. Yes, September.
  • It’s keeping me wa a a a aiting . . .

    ************************************************

    (1) IBM Power 780 (2 chips, 8 cores, 32 threads) with IBM DB2 9.5 (1,200,011 tpmC, $.69/tpmC, configuration available 10/13/10); IBM Power 595 (5 GHz, 32 chips, 64 cores, 128 threads) with IBM DB2 9.5 (6,085,166 tpmC, $2.81/tpmC, configuration available 12/10/08); vs. Oracle SPARC T5-8 (8 chips, 128 cores, 1024 threads – 8,552,523 tpmC, $.55/tpmC, configuration available 9/25/13).
    Source: http://www.tpc.org. Results current as of 3/26/13.
    TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    March 26, 2013 at 2:23 pm

    Posted in Oracle, SPARC T5, TPC-C


    New Oracle M5 and T5 SAP Benchmark Results: No SPARC at all

    with 14 comments

    If you were hoping for some Last Friday Night excitement from Oracle’s new SPARC servers announcement this week, we haven’t seen it yet. Oracle just this morning published two SAP SD 2-tier benchmark results, one on the M5-32 and one on the T5-8.

    The IBM POWER7+ with DB2 10 result from back in September was 1.3x greater per core than the M5 result and 1.9x greater than the T5 result.(1) The IBM average database request time was also much better, and the IBM system used its CPU more effectively. The per-core arithmetic is sketched below.
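This sketch uses the SAPS and core counts from the certified results in footnote (1); SAPS per core is a derived, illustrative figure rather than a published metric.

```python
# Per-core arithmetic for the SAP SD 2-tier comparison, using the certified results in footnote (1).

saps = {
    "IBM Power 780 (POWER7+, 96 cores)": {"saps": 311_720, "cores": 96},
    "Oracle SPARC M5-32 (192 cores)":    {"saps": 472_600, "cores": 192},
    "Oracle SPARC T5-8 (128 cores)":     {"saps": 220_950, "cores": 128},
}

per_core = {name: r["saps"] / r["cores"] for name, r in saps.items()}
for name, value in per_core.items():
    print(f"{name}: {value:,.0f} SAPS per core")

ibm = per_core["IBM Power 780 (POWER7+, 96 cores)"]
print(f"IBM vs. M5 per core: {ibm / per_core['Oracle SPARC M5-32 (192 cores)']:.2f}x")  # ~1.3x
print(f"IBM vs. T5 per core: {ibm / per_core['Oracle SPARC T5-8 (128 cores)']:.2f}x")   # ~1.9x
```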

    Will the sun come out tomorrow for Oracle?

    ************************************************

    (1) IBM Power 780 (3.72 GHz) two-tier SAP SD Standard Application Benchmark result (SAP enhancement package 5 for the SAP ERP 6.0 application): 12 processors / 96 cores / 384 threads, POWER7+, 1536 GB memory, 57,024 SD benchmark users, running AIX® 7.1 and DB2® 10, dialog resp.: 0.98s, line items/hour: 6,234,330, dialog steps/hour: 18,703,000, SAPS: 311,720, DB time (dialog/update): 0.009s / 0.014s, CPU utilization: 99%, Certification #2012033.

    Oracle SPARC Server M5-32 SAP SD 2-tier result of 85,050 users: average dialog response time: 0.80 seconds, fully processed order line items per hour: 9,452,000, dialog steps per hour: 28,356,000, SAPS: 472,600, average database request time (dialog/update): 0.018 sec / 0.044 sec, CPU utilization of central server: 82%, operating system (central server): Solaris 11, RDBMS: Oracle 11g, SAP Business Suite software: SAP enhancement package 5 for SAP ERP 6.0, 32 processors / 192 cores / 1536 threads, SPARC M5, 3.60 GHz, 16 KB (D) and 16 KB (I) L1 cache and 128 KB L2 cache per core, 48 MB L3 cache per processor, 4096 GB main memory, Certification #2013009.

    Oracle SPARC Server T5-8 SAP SD 2-tier result of 40,000 users: average dialog response time: 0.86 seconds, fully processed order line items per hour: 4,419,000, dialog steps per hour: 13,257,000, SAPS: 220,950, average database request time (dialog/update): 0.049 sec / 0.131 sec, CPU utilization of central server: 88%, operating system (central server): Solaris 11, RDBMS: Oracle 11g, SAP Business Suite software: SAP enhancement package 5 for SAP ERP 6.0, 8 processors / 128 cores / 1024 threads, SPARC T5, 3.60 GHz, 16 KB (D) and 16 KB (I) L1 cache and 128 KB L2 cache per core, 8 MB L3 cache per processor, 2048 GB main memory, Certification #2013008.

    Source: http://www.sap.com. Results current as of 3/25/13.

    SAP and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. Other names may be trademarks of their respective owners.

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


    Written by benchmarkingblog

    March 25, 2013 at 12:28 pm

    Posted in Oracle, SAP, SPARC T5


    Cisco Virtualization, the Price You Pay

    with one comment

    We were lucky enough recently to have two comparable Cisco SAP SD benchmark results published. The results used pretty much the same hardware, the same software, the same benchmark kit. The big difference was that one was virtualized and one was not.

    The performance metric for this benchmark is the number of SAP benchmark users. For the regular configuration, the number of users was 6,530. The virtualized version was 1,000 users fewer.(1)

    That’s a considerable difference when it comes to running your business.

    Compare that to the legacy results of PowerVM performance on this very same benchmark. The results of both the virtualized and non-virtualized versions are essentially the same.(2)
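To make the comparison concrete, here is the overhead arithmetic using the published user counts from footnotes (1) and (2) below; the percentage and per-core figures are derived for illustration only.

```python
# Virtualization-overhead arithmetic for the SAP SD results in footnotes (1) and (2).
# Benchmark user counts are published; the derived percentages are illustrative only.

cisco_bare_metal_users = 6_530   # Cisco UCS B200 M3, no virtualization
cisco_kvm_users        = 5_530   # same server, one KVM virtual machine

power_bare_metal = {"users": 2_035, "cores": 4}   # IBM Power 570 (POWER6), no virtualization
power_powervm    = {"users": 1_020, "cores": 2}   # IBM Power 570, PowerVM, 2 virtual CPUs

cisco_drop = (cisco_bare_metal_users - cisco_kvm_users) / cisco_bare_metal_users
print(f"Cisco SD users lost to virtualization: {cisco_drop:.1%}")   # ~15.3%

bare_per_core = power_bare_metal["users"] / power_bare_metal["cores"]   # 508.75 users/core
virt_per_core = power_powervm["users"] / power_powervm["cores"]         # 510.00 users/core
print(f"PowerVM users per core: {virt_per_core:.2f} virtualized vs. {bare_per_core:.2f} bare metal")
```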

    Power Systems servers implement a virtualization architecture with components embedded in the hardware, firmware and operating system software. The capabilities of this integrated virtualization architecture are thus significantly different and in many areas more advanced.

    Without paying the performance price.

    ************************************************

    (1) SAP SD 2-tier Cisco UCS B200 M3, 2 processors / 16 cores / 32 threads, Intel Xeon Processor E5-2690, 2.90 GHz, 64 KB L1 cache and 256 KB L2 cache per core, 20 MB L3 cache per processor, 256 GB main memory; SAP SD benchmark users: 6,530 , Average dialog response time: 0.98 seconds, Throughput: Fully processed order line items/hour: 713,670, Dialog steps/hour: 2,141,000, SAPS: 35,680. Average database request time (dialog/update): 0.015 sec / 0.036 sec, CPU utilization: 99%, Red Hat Enterprise Linux 6.3, Sybase ASE 15.7, SAP enhancement package 5 for SAP ERP 6.0, Certification #2013001.
    SAP SD 2-tier Cisco UCS B200 M3, 2 processors / 16 cores / 32 threads, Intel Xeon Processor E5-2690, 2.90 GHz, 64 KB L1 cache and 256 KB L2 cache per core, 20 MB L3 cache per processor, 256 GB main memory; 1 virtual machine (VM) using 32 virtual CPUs; CPU utilization of VM1 (DB/Dia/Upd/Msg/Enq): 97%; number of SAP SD benchmark users: 5,530; average dialog response time: 0.96 seconds; throughput: fully processed order line items/hour: 605,330; dialog steps/hour: 1,816,000; SAPS: 30,270; average database request time (dialog/update): 0.021 sec / 0.045 sec; CPU utilization of central server: 97%; operating system (central server): Red Hat Enterprise Linux 6.4 on KVM; RDBMS: Sybase ASE 15.7; SAP enhancement package 5 for SAP ERP 6.0; Certification #2013007.

    (2) SAP SD 2-tier IBM Power 570, 2 processors / 4 cores / 8 threads, POWER6, 4.7 GHz; number of SAP SD benchmark users: 2,035 (users/core = 508.75); average dialog response time: 1.99 seconds; throughput: fully processed order line items/hour: 203,670; dialog steps/hour: 611,000; SAPS: 10,180; average DB request time (dia/upd): 0.011 sec / 0.015 sec; CPU utilization of central server: 99%; operating system (central server): AIX 5L Version 5.3; RDBMS: Oracle 10g; SAP ECC Release: 6.0; Certification #2007037.
    SAP SD 2-tier IBM Power 570, 2 processors / 4 cores / 8 threads using 2 virtual CPUs, POWER6, 4.7 GHz; number of SAP SD benchmark users: 1,020 (users/core = 510); average dialog response time: 1.99 seconds; throughput: fully processed order line items/hour: 102,000; dialog steps/hour: 306,000; SAPS: 5,100; average DB request time (dia/upd): 0.005 sec / 0.009 sec; CPU utilization of central server: 50%; CPU utilization inside virtual machine: 99%; operating system (central server): AIX 6.1 on IBM PowerVM (using 2 virtual CPUs); RDBMS: DB2 9.5; SAP ECC Release: 6.0; Certification #2008080.

    http://www.sap.com. Results current as of 3/20/13.

    The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.

    SAP and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. Other names may be trademarks of their respective owners.


    Written by benchmarkingblog

    March 20, 2013 at 5:01 pm

    Posted in Cisco, SAP
