Elisabeth Stahl on Benchmarking and IT Optimization

Oracle Meets That ’70s Show


Last week I made the annual spring break pilgrimage to my childhood home in the shadows of the cherry blossoms.

What always strikes me when I visit — and you’ve probably had the same experience — is how nothing, almost nothing, has changed since I lived there four decades ago. Yes, there’s a huge TV with cable now. And a cell phone, though not so smart yet. And an iPad that always needs something done to it. But other than these few new features, the general layout and beauty of the interior is essentially the same.

Which I love. Why get new kitchen cabinets when you can take the beautiful solid wood ones and have them refinished? Why buy new cheap chairs when '50s Danish Modern is built so well and gorgeous to boot?

But one of the best examples of this retro environment, hands down, has to be the downstairs bathroom. When you enter, you are transported to the time of Nixon and Sonny and Cher. The colors are tremendous – bright, bright yellows and oranges. Big plaid wallpaper. And wicker accessories. A '70s dream of a bathroom. And you know what — it still looks great. The glamour of everything from the '70s has returned in full force in this one tiny room.

But some things are not meant to come back. And that includes the way some vendors compare systems and benchmarks.

I recently saw a comparison from Oracle of its SPARC T7-1 vs. the IBM Power System S824. It brought me right back to when I started blogging almost ten years ago, when we were all inundated with benchmark flaws. Let's take a look at some of the details:

  • The tool Oracle used to compare the systems is NOT an industry standard benchmark audited by a third party. It is a tool that can be used by anyone. Oracle ran all tests themselves.
  • The tool used is adapted from the TPC-C benchmark, which Oracle itself has said in the past it considers dated.
  • The disks used in the systems compared are not the same – HDD vs. SAS.
  • The logs and database files for the IBM test were not run on the IBM system – they were run on a different Oracle system.
  • Solaris 11.3 was used for the logs and database file systems on the Oracle side; Solaris 11.2 was used for the IBM configuration.
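Mismatches like these are exactly what a fair comparison has to rule out, and they are easy to catch mechanically. A minimal sketch in Python (the configuration values below are illustrative stand-ins, not the actual test parameters):

```python
# A fair comparison should match on everything except the component
# under test. This tiny parity check flags unexpected differences.
# The config values are hypothetical stand-ins for the real setups.

def config_mismatches(config_a, config_b, allowed_diffs=()):
    """Return the keys where two benchmark configs differ unexpectedly."""
    keys = set(config_a) | set(config_b)
    return sorted(
        k for k in keys
        if k not in allowed_diffs and config_a.get(k) != config_b.get(k)
    )

oracle_cfg = {"system": "SPARC T7-1", "disk": "SAS",
              "os": "Solaris 11.3", "log_host": "local"}
ibm_cfg = {"system": "Power S824", "disk": "HDD",
           "os": "Solaris 11.2", "log_host": "remote Oracle system"}

# Only the system under test should differ; everything else is a red flag.
flags = config_mismatches(oracle_cfg, ibm_cfg, allowed_diffs={"system"})
print(flags)  # ['disk', 'log_host', 'os']
```

The point is not the code. It is that when a vendor runs all the tests itself, nobody is running even this trivial parity check on its behalf.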


A photo of my childhood downstairs bathroom was Instagrammed recently. It received 35 likes, over half of them from students at the best design school in the country. That makes sense. Oracle’s benchmark comparisons don’t.



Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.



Written by benchmarkingblog

March 23, 2016 at 10:07 am

Posted in Oracle, POWER8, SPARC


Embracing the Cognitive World Every Day with IBM Systems


Read the full article here

OK, so it was time. I didn't have an excuse anymore: no report due at work, no holidays coming, no trip to the dentist.

It was finally time to do something that I had avoided for almost a year. Something that was even worse, if you can believe it, than preparing my taxes. It was time to make the dreaded updates to my insurance policy.

These were not the sort of quick changes that I could easily do online or rapidly with a phone call. These were excruciatingly detailed updates to all of my policies: home, auto, and personal. They came with multiple liabilities, multiple schedules, and multiple riders. I would need to block out many hours of the day for this one. And suffer through the nightmare of complex negotiations with my insurance agent to hedge against the risk of an ugly, contingent, uncertain loss.

As it turned out, my foray into policy updates ended up taking weeks, not hours. The complexity of the millions of arcane rules around these types of policies is mind-boggling. Are you living in the state of Ohio with 3 1/2 baths? How many boats do you own? Gosh forbid you have any male teenage drivers. Or a dog.

But this very situation is actually a typical everyday situation where cognitive computing can really shine . . .





Written by benchmarkingblog

March 2, 2016 at 11:06 am

Posted in Cognitive, Watson


#CMG2015: Performance Paradise


Do you work in Systems Performance, IT Benchmarking, or Capacity Planning? Then (if you are not already) you definitely need to be a part of the Computer Measurement Group!

Computer Measurement Group (CMG) is a not-for-profit, worldwide organization of IT professionals committed to sharing information and best practices focused on ensuring the efficiency and scalability of IT service delivery to the enterprise through measurement, quantitative analysis and forecasting.

For decades CMG has been a leading organization for the exchange of information among enterprise computing professionals. Anyone charged with the measurement and management of computer systems would benefit from membership in CMG.

We recently held our annual international technical conference in San Antonio, home of the Alamo and the amazing River Walk.

Where else can you enjoy multiple days learning from and sharing with a few hundred of the best performance and capacity people in the world?

We had a great mix of topics at this conference (Full Disclosure: I am the Program Chair of this conference) across many focus areas including Performance Engineering, Application Performance Management, Mobile and Web Performance, Mainframe Performance and Capacity Planning, Network Capacity and Performance, Storage, and much more.

Here’s just a small sample of some of the awesome presentations:

  • I Feel the Need for Speed
  • Managing the Datacenter as the Computer
  • Tackling Big Data
  • Performance Considerations for Public Cloud
  • Why is this Web App Running Slowly?

I was a speaker on two exciting panels. The first was on Hybrid Cloud, where I discussed how the Fit for Purpose methodology can work when deciding on the right environmental mix of on-premises, off-premises, private, and public cloud: namely, the Best Execution Venue. The second panel was on advancing your career in the Performance area, where I had a few good stories to tell.

The key to this conference and to this group as a whole is the laser-like focus on all of the groundbreaking, state-of-the-art areas in IT — but with an extreme emphasis on how they relate to Performance and Capacity.

So we talked Cloud. But Cloud with Performance. We talked Analytics. But Analytics with Performance. We talked Testing. But Testing with Performance . . .

All of the learning is wonderful. But I would have to say, as we tend to say about all events, that the networking and sharing is the absolute best part. What a Wonderful World of a conference.

And CMG is not just an annual conference. It’s an organization that you can be part of year-round through webinars, papers, articles, journals, regional groups and even social media.

Working on this conference was like planning a wedding: it can only go off without a hitch with an outstanding team to make it happen. Now the honeymoon begins. Until next month, when we start it all over again for #CMG2016!







Written by benchmarkingblog

November 10, 2015 at 3:23 pm

Posted in CMG


Back in Time with Oracle


Some of you may know that this week was a very big one for “Back to the Future” movie fans. On Wednesday, Oct. 21, 2015, at 4:29 p.m., our today caught up to the tomorrow depicted in “Back to the Future, Part II.” In that 1989 film, a DeLorean time machine appears from 30 years in the past.

To those who love time travel, this is a really big deal. Some towns even went so far as to rename themselves after the city featured in the film. Ceremonies were performed worldwide at exactly 4:29 p.m.

And this reminded me of a benchmark result that was just published today by Oracle on the SAP SD benchmark.

As we move into newer digital workloads, some of the older industry benchmarks have gone by the wayside. Many of us have spent a lot of time analyzing these newer workloads and developing new metrics for them. But one classic benchmark is still extremely appropriate for many of today’s applications – and that is the suite of SAP benchmarks.

But this new Oracle result, just published, is clearly dated — even though it is a brand new result on a brand new Oracle SPARC system. The IBM Power Systems result with DB2, published over a year earlier, delivers over 2X better performance per core than this new Oracle SPARC result. (1)
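The per-core arithmetic behind that 2X claim can be checked directly from the certified figures cited in footnote (1):

```python
# Per-core comparison using the certified SAPS figures from the footnote.
ibm_saps, ibm_cores = 436_100, 80         # IBM Power E870, Cert #2014034
oracle_saps, oracle_cores = 168_600, 64   # Oracle SPARC T7-2, Cert #2015050

ibm_per_core = ibm_saps / ibm_cores            # about 5,451 SAPS per core
oracle_per_core = oracle_saps / oracle_cores   # about 2,634 SAPS per core

ratio = ibm_per_core / oracle_per_core
print(f"{ratio:.2f}x")  # about 2.07x, i.e. over 2X better per core
```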

What’s really exciting, unlike this new benchmark result, is that many of the predictions of the future in the “Back to the Future” movie were right on. But I am still waiting for the dog-walking drone.



(1) IBM Power Enterprise System E870 on the two-tier SAP SD standard application benchmark running SAP enhancement package 5 for the SAP ERP 6.0 application: 8 processors / 80 cores / 640 threads, POWER8, 4.19 GHz, 2048 GB memory, 79,750 SD benchmark users, running AIX® 7.1 and DB2® 10.5, dialog response: 0.97 seconds, order line items/hour: 8,722,000, dialog steps/hour: 26,166,000, SAPS: 436,100, database response time (dialog/update): 0.013 sec / 0.026 sec, CPU utilization: 99%, Cert #2014034. Vs. Oracle SPARC T7-2 result of 30,800 users: average dialog response time: 0.96 seconds, fully processed order line items/hour: 3,372,000, dialog steps/hour: 10,116,000, SAPS: 168,600, average database request time (dialog/update): 0.022 sec / 0.047 sec, CPU utilization of central server: 98%, operating system, central server: Solaris 11, RDBMS: Oracle 12c, SAP Business Suite software: SAP enhancement package 5 for SAP ERP 6.0, Certification #2015050. SPARC T7-2: 2 processors / 64 cores / 512 threads, SPARC M7 4.133 GHz, 16 KB (D) and 16 KB (I) L1 cache per core, 256 KB (D) L2 cache per 2 cores and 256 KB (I) per 4 cores, 64 MB L3 cache per processor, 1024 GB main memory.

SAP and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. All other product and service names mentioned are the trademarks of their respective companies.


Written by benchmarkingblog

October 23, 2015 at 12:59 pm

Posted in Oracle, SAP


Amazon, Don’t Be A Performance Amateur


I read just this morning that LaGuardia Airport in New York, with its dilapidated terminals and long delays, will at long last be rebuilt by 2021.

The plans look promising, and work has already started. With new taxiways, a train, and a grand entryway, it will finally be something to be proud of: major infrastructure certainly needed for one of the big league cities of the world.

And to play in the big league, you need to have the right plans to study and analyze, and you need to know what you are talking about. Which is why I was so disappointed this morning to also read about some new performance claims from Amazon Web Services (AWS).

In an announcement of a new relational database offering, Amazon made claims that simply had me confused. Let’s take a look:

  • The claims mix up performance with price performance. Obviously this difference is pretty basic. And important — but especially important in this environment where AWS charges extra for database instances, storage, and I/O.
  • The claims mix up speed and throughput. This difference can be very important because in this environment there are only 3 AWS regions right now offering these services and network performance can be key.
  • The claims mix up general comparisons with other “existing solutions” with a comparison using one particular tool, SysBench, to one particular release of one particular database, MySQL 5.6.
  • The claims mix up whether any improvement is due to software or hardware, while stating that special techniques were used on both. Need I say more?
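The first bullet is worth unpacking, because raw performance and price-performance can rank the same systems in opposite order. A small illustration with entirely hypothetical throughput and pricing numbers:

```python
# Hypothetical systems: A is faster outright, B is cheaper per unit of work.
# Conflating "performance" with "price-performance" can flip the winner.
systems = {
    "A": {"tps": 10_000, "cost_per_hour": 5.00},  # higher throughput
    "B": {"tps": 6_000, "cost_per_hour": 2.00},   # lower cost
}

by_performance = max(systems, key=lambda s: systems[s]["tps"])
by_price_perf = max(
    systems, key=lambda s: systems[s]["tps"] / systems[s]["cost_per_hour"]
)

print(by_performance)  # 'A' wins on raw throughput
print(by_price_perf)   # 'B' wins on throughput per dollar
```

Which metric matters depends on the workload and the budget, which is exactly why a claim that blurs the two tells you very little.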

To play in the majors you have to understand the complexities of the subject. In attempting to address the performance of this new offering, AWS is clearly showing its minor league stripes.

Have you ever been at that gate at La Guardia, I think it’s A1A, where you have to carry your suitcase down two flights of stairs to a small waiting room with no air?



Amazon Web Services and the “Powered by Amazon Web Services” logo are trademarks of, Inc. or its affiliates in the United States and/or other countries.



Written by benchmarkingblog

July 28, 2015 at 11:41 am

Posted in Amazon, Cloud


What’s In Your Bag?


With summer just beginning in this part of the world, vacations are on everyone’s mind. And for me, that means hiking.

I actually have a list of everything that should go into my hiking knapsack. It’s written on a scrap of an old envelope and was first used prior to my going up Mount Washington. Here’s what’s on it:

  1. The Electronics: compass, map, phone, headlamp
  2. The Emergency Food: trail mix and granola bars, extra water
  3. The Moleskin: for my big right toe
  4. The Defense: bear spray and pocket knife
  5. The Sweater: my old gray cashmere with the big holes
  6. The Support: my hiking poles
  7. Just In Case: bug net, bandages, extra wool socks, hat, gloves, rain pants and long underwear
  8. If I Get in Trouble: whistle and waterproof matches
  9. The Drug of Choice: Motrin — for my back
  10. May be needed at the end: After Bite and the hot tub

Without these, I’d be lost. Literally. Maybe even worse.

And I was reminded the other day that the same type of preparation I use for my hiking trips is imperative when preparing my laptop bag for a business meeting.

And I realized that in the end I bring pretty much the same stuff.

  1. The Electronics: chargers, pointers, batteries
  2. The Emergency Food: cereal bars and pretzels, in case they don’t feed you
  3. The Moleskin: calendar that is – to schedule the next meeting
  4. The Defense: quick wit and verbal barbs
  5. The Sweater: my nice black cashmere, for when the air conditioning blows
  6. The Support: list of other subject matter experts
  7. Just In Case: the cheat sheet with the latest POWER8 news, the titles of who will be at the meeting, and the fun-to-read magazine because you never know when you are going to have to wait
  8. If I Get in Trouble: AAA or American Express Travel
  9. The Drug of Choice: Motrin — for my head
  10. May be needed at the end: drink in the hot tub




Written by benchmarkingblog

June 25, 2015 at 4:00 pm

Posted in Uncategorized

Will the Real Benchmark Please Stand Up


They are at it once again. Those imposter benchmarks.

You know. The ones that initially look and feel like real IT industry performance benchmarks.

But then you read the article again, you look a bit more closely and you realize. They are at it again.

So how can we detect and overcome this benchmark fraud?

  • Make sure the names of the actual benchmarks are clearly stated. You know, something with letters like TPC, SPEC, SAP, STAC, . . .
  • Make sure the metrics are correct. You know, something like transactions per minute or number of users.
  • Make sure there’s a really good footnote with all the details. Just the data is not enough.
  • Make sure there is a link to the site about the benchmark and preferably the results.
  • And if you sense an imposter benchmark, find REAL data on the systems you are interested in, at an official benchmark or vendor site. Or run the real workload as a client benchmark.
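The checklist above can even be run as a mechanical screen. A sketch in Python, with a hypothetical claim standing in for the kind of marketing material in question:

```python
# The detection checklist as a mechanical screen. A claim that fails
# any of these checks deserves deep skepticism. The claim dicts below
# are hypothetical examples, not real vendor publications.

RECOGNIZED_BENCHMARKS = ("TPC", "SPEC", "SAP", "STAC")
RECOGNIZED_METRICS = ("transactions per minute", "users", "SAPS")

def screen_claim(claim):
    """Return the checklist items a benchmark claim fails."""
    failures = []
    if not any(b in claim.get("benchmark", "") for b in RECOGNIZED_BENCHMARKS):
        failures.append("no recognized benchmark named")
    if not any(m in claim.get("metric", "") for m in RECOGNIZED_METRICS):
        failures.append("no standard metric")
    if not claim.get("footnote"):
        failures.append("no detailed footnote")
    if not claim.get("link"):
        failures.append("no link to official results")
    return failures

imposter = {"benchmark": "in-house workload", "metric": "relative speedup"}
print(screen_claim(imposter))
# All four checks fail: treat this one as an imposter benchmark.
```

A real result, of course, needs human scrutiny too; this only catches the claims that do not even dress the part.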

If you’re not seeing these things, very likely it is some obscure testing that may or may not have a proper benchmark kit, audited results, etc. And it may very likely be artificially tuned to exploit only certain hardware or software that the imposter is looking to promote.

An industry benchmark masquerader that is actually a tried and true swindler.




Written by benchmarkingblog

June 4, 2015 at 6:31 pm

Posted in Uncategorized

