Elisabeth Stahl on Benchmarking and IT Optimization

Archive for the ‘Performance’ Category

When You Wish Upon a Star – A New Performance Website!


When I was in college, one of my favorite places to go (to get away from my theoretical math classes) was the city art museum. I would leave my dorm really early in the morning — it must have actually been about 11AM — walk over the bridge, through many deserted city blocks and then over to the museum, grabbing a hot pretzel along the way. The journey seemed to take forever, much of it through an urban wasteland.

One of my dreams during this time involved a shortcut of sorts. In this dream, I would leave campus, cross the bridge and immediately take a short stroll along the river over to the museum. And it certainly was a dream – as the area where this supposed trail would be consisted of old train tracks, trash, brush, debris, waste, refuse, rubbish, litter, scrap, flotsam and jetsam, rubble and detritus.

Many years later, I returned to my alma mater. And found myself crossing the bridge once again to go to my favorite museum. But this time as I approached the end of the bridge I suddenly saw a stairway that went down to the river. And I took it. And lo and behold, there it was — the path of my dreams, a paved bike-and-hike trail that followed along the river, directly to my beloved art museum.

These amazing things don’t happen very often, but when they do you can’t really believe it. And that’s what I’ve been thinking about lately, thanks to an awesome new website from IBM.

For many years, as some of you know, I worked on industry standard benchmarks, writing articles about IBM and other results and highlighting comparisons. First it was in a newsletter, then an internal company website, then this blog — but I always dreamed that there would be a one-stop shop for everyone to go to see all the new and exciting benchmark results. And now we have it.

You can now find detailed IBM Power Systems performance data proof points. This brand-new website contains test results for several different workload areas:

• Big Data and Analytics:  Showing faster time-to-value for big data. Discover how Hadoop innovation can deliver faster, more affordable business insights.
• Technical Computing:  See how IBM Power Systems solutions deliver faster time to insight and accelerated performance for demanding HPC workloads.
• Cloud:  Find out how to run swiftly and smoothly on a high-performing global infrastructure. Deploy global cloud infrastructures rapidly on virtual servers.
• Virtualization:  See how new virtualization technology makes deploying applications easier.
• Online Transaction Processing and Enterprise Resource Planning:  See IBM and SAP industry-leading performance across multiple workloads.


Sometimes dreams do come true.


The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.




Written by benchmarkingblog

August 3, 2016 at 4:36 pm

Posted in Performance


PureSystems, Bigger (and more Powerful) than a Bread Box


IBM PureSystems combine the flexibility of a general purpose system, the elasticity of cloud and the simplicity of an appliance tuned to the workload.

Recently I’ve been hearing something that I find odd. Because of the simplicity, flexibility, and integrated design of PureSystems, I’ve heard the “blade” word mentioned. I mean, nodes or IT Elements (ITEs) are what’s involved here. Does the form factor’s appearance equate in any way to performance? How big exactly is one of these things? And I don’t mean floor space.

Using the Power Systems Performance Report, for instance, let’s take a look at the rPerf numbers.

As we know, rPerf is an estimate of commercial processing performance relative to other IBM UNIX systems. It is derived from an IBM analytical model which uses characteristics from IBM internal workloads, TPC and SPEC benchmarks. The model simulates some of the system operations such as CPU, cache and memory.

An IBM PureSystems 32-core p460 at 3.55 GHz has an rPerf of 331.1. What’s your guess on what that system can be compared with? The answer: a Power 750 at 331.06. What about a 16-core p260? Its rPerf is 176.6, compared with 176.57 for a 16-core Power 750. No small potatoes.
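Since rPerf is a relative metric, the comparison above boils down to a ratio. A minimal sketch, using the figures quoted above (the dictionary and helper function are my own, not anything IBM publishes):

```python
# rPerf figures quoted above; rPerf is relative to other IBM UNIX systems,
# so only ratios between systems are meaningful, not the raw numbers.
rperf = {
    "PureSystems p460 (32-core, 3.55 GHz)": 331.10,
    "Power 750 (32-core)": 331.06,
    "PureSystems p260 (16-core)": 176.60,
    "Power 750 (16-core)": 176.57,
}

def relative(a: str, b: str) -> float:
    """Return how system a compares to system b as a performance ratio."""
    return rperf[a] / rperf[b]

ratio = relative("PureSystems p460 (32-core, 3.55 GHz)", "Power 750 (32-core)")
print(f"p460 vs. 32-core Power 750: {ratio:.4f}")  # ~1.0001, effectively equal
```

The ratios come out within a tenth of a percent of 1.0, which is the whole point: the “blade-looking” node estimates out at the same commercial throughput as the rack server.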

The point here is that these systems do have the advantages of blades and appliances. They have superb systems management capabilities. But also a whole lot more — including powerful performance.

It may look a little like an appliance. It may smell somewhat like a blade. It may taste like bread. But it’s also amazing in the ability to leap tall buildings in a single bound.


Sources: Results current as of 6/14/12.
TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).
SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).




Written by benchmarkingblog

June 14, 2012 at 10:24 am

Avoid “Jump the Gun” Benchmark Tests


I was talking with someone the other day and noticed something funny. They were chomping at the bit to do some deep-down Java tuning: the make-ten-changes-at-once-and-blow-this-thing-out-of-the-water kind of tuning. What they didn’t yet have was a clue about where they were going, or how they would even know if they got there.

Before starting any systems performance testing or benchmarking, here are some of my best practices:

  • First things first, define your benchmark objectives. You need success metrics so you know that you have succeeded. They can be response times, they can be transaction rates, they can be users, they can be anything — as long as they are something.
  • Document your hardware/software architecture. Include device names and specifications for systems, network, storage, applications.
  • Implement just one change variable at a time. (OK, sometimes we can get away with a couple.)
  • Keep a change log — what tests were run, what changes were made, what the results were, what your conclusions were for that specific test.
  • Map each test to the performance reports on which you based your conclusions. Using codes or special syntax when you name your reports sometimes helps.
  • Keep going, don’t give up, you will get there.
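The change log and one-variable-at-a-time practices above can be sketched as a tiny data structure. This is only an illustration; the field names and the example entry are my own invention, not a real test record:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BenchmarkRun:
    """One entry in the change log: one run, one change, one conclusion."""
    test_id: str      # code used to name the matching performance report
    run_date: date
    change: str       # the single variable changed for this run
    result: str       # measured outcome against the success metrics
    conclusion: str   # what this specific test told you

log: list[BenchmarkRun] = []

# Hypothetical entry: one change, one measured result, one conclusion.
log.append(BenchmarkRun(
    test_id="JVM-GC-01",
    run_date=date(2012, 4, 13),
    change="Increased JVM heap from 2 GB to 4 GB (only change this run)",
    result="Avg response time 240 ms -> 195 ms at 500 concurrent users",
    conclusion="Heap size was a bottleneck; keep 4 GB, vary GC policy next run",
))
```

The `test_id` doubles as the code in the report filename, so every conclusion can be traced back to the report it came from.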

Some of this we learned in science class. Some of this is common sense. But you’d be surprised sometimes by how much sense these days is uncommon.





Written by benchmarkingblog

April 13, 2012 at 11:53 am

Posted in Performance, Uncategorized


The Performance Estimate Low Down


I had avoided it for about as long as I could but I started working on my taxes over the weekend. I thought I might calculate my tax rate but then decided against it. Much too depressing.

There’s been a lot in the news lately on the average tax rate. What is fair, what is not, how to fix it all. Should investment income be taxed at the same rate as your salary? Should Warren Buffett pay the same tax rate as Debbie, his secretary? And does looking at the average of the two make any sense at all?

This discussion reminded me about questions I’ve been getting lately on estimating performance of IT systems.

Systems performance estimates that compare one system to another have sprouted up everywhere. And it has recently come to my attention that many of us have been placing almost divine faith in these “performance estimates.” We love to quote them, we use them in many of our capacity and TCO tools, and we may even make huge purchase decisions based on them.

What we need to realize is that sometimes these estimates are based on ridiculously inane models that basically average an OLTP benchmark here and an ISV benchmark there and an HPC benchmark from somewhere else and then throw in something with Java to try to come up with an overall value for a system. Without taking any of the crucial aspects of the technology into consideration. Makes sense, right?

And guess what? Sometimes when there is no input for a certain benchmark in a model, the creator of the performance estimate makes something up. Or even worse, allows the vendor of the system in question to make something up. So if a vendor has published very few benchmarks, most of the performance estimate could be whipped cream.
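To see how fragile such a composite can be, here is a toy sketch. The category names, relative scores, and the “guessed” input are entirely invented; the only point is that one made-up input can swing the “overall” number by a lot:

```python
from statistics import mean, geometric_mean  # geometric_mean: Python 3.8+

# Invented relative scores (1.0 = some baseline system) for four
# benchmark categories, the kind a naive composite model might average.
scores = {"OLTP": 1.2, "ISV app": 0.9, "HPC": 3.0, "Java": 1.1}

# Now suppose the HPC input was never measured, and someone
# (maybe the vendor) fills in a guess instead:
scores_with_guess = dict(scores, HPC=5.0)

print(f"arithmetic mean:          {mean(scores.values()):.2f}")             # 1.55
print(f"with guessed HPC input:   {mean(scores_with_guess.values()):.2f}")  # 2.05
# One unmeasured input moved the composite by half a "system" of performance.

# A geometric mean is less dominated by a single outlier, but it is
# still an average of unrelated workloads -- no cure for whipped cream.
print(f"geometric mean:           {geometric_mean(scores.values()):.2f}")
```

None of the three numbers tells you how *your* workload will run; that takes a measurement or a benchmark that actually resembles it, which is the point of the next paragraph.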

Almost anything is better than this. So run and measure your workloads for real. Or use a published industry standard or ISV benchmark that matches your workload. Here’s what I’m thinking — it’s imperative to make sure that you understand exactly what is behind every performance estimate that you use. And only ever use them as a last resort.




Written by benchmarkingblog

February 23, 2012 at 2:49 pm

Posted in Performance


Dear Performance Advisor Sergio in Austin


Dear Elisabeth:

I had an experience that I would like to share with your readers.

The other day a check engine light showed up in my car. To anyone else this might be a non-issue, but I always dread those lights. We all have our limitations, and mine is the inability to resolve any car problem besides an empty fuel tank.

The good news is that after taking my car to the mechanic, the only issue was a loose gas cap that was quickly resolved without charge.

The bad news was that it took a day of inconvenience to find out something that would have been simple to resolve if I had a mechanic as a neighbor.

Although I don’t know anything about working with cars, I do happen to work with a group of IBM experts in Power system performance.

They have recently put together a set of advisors that will monitor current running performance of a live Power system with low overhead.  After monitoring, the advisors will provide a clear understanding of how the system is performing, and provide some expert advice on first places to look for improvements.

Essentially, we have found a way to move a whole team of experts into everyone’s data center.

There are currently 3 advisors that can be downloaded for free from the IBM developerWorks website:

Sergio in Austin

Dear Sergio in Austin:
Thanks so much for your letter. So much better than the column last week entitled “Bride wants to keep friend’s lecherous husband off guest list.” (Yes, this is a real one.) Very exciting news about these wonderful performance tools.  Readers, if you have any questions about them, feel free to send a letter to  Enjoy.




Written by benchmarkingblog

February 10, 2012 at 5:55 pm

Mowing Oracle’s Performance Weeds


We have a local town ordinance here where I live that fines you if your grass grows above 6 inches. Lately, I’ve become really annoyed with this legacy regulation.

The higher your grass, the deeper its roots will be. Deeper roots allow your lawn to stay green with less water, and fend off insect attacks without dying off. Higher grass filters rainwater and prevents soil erosion. Tall grass actually creates a canopy of shade which shades out weeds. If you mow your lawn higher, you won’t have to cut it as often.

Some towns have increased their grass regulation heights to 8 inches. Some to 9 inches. I’ve seen some progressive towns that have even gone to 12 inches. I read one article that said Omaha, Nebraska allows 18 inches. Now at 18 inches you might just need to wear your hiking boots to go and get the paper.

But my point here is that 6 inches is a number. And just a number. It is not consistent, it isn’t right for every situation, and it is certainly not backed by any real data.

Which reminds me of some of the press releases and presentations I’ve seen from Oracle lately. No matter what the product, feature, or application, the improved performance is always claimed as 10x. It can be query performance, storage performance, response times, you name it. 10x. Have we ever seen the data behind the 10x? Is there a white paper or a benchmark where we can see the 10x? Is there a footnote that describes the 10x? And remember, it’s not 9.7x or 9.8x or even 10.1x. It’s always 10x.

I’ve thought about spending my days working to get signatures to increase my town’s 6 inch grass height ordinance. But it’s more fun to analyze Oracle data. If there was any. Oh, I’ve got to go now to mow my lawn.




Written by benchmarkingblog

July 19, 2011 at 2:50 pm

Posted in Exadata, Oracle, Performance


Performance and Capacity Implications for a Smarter Planet


IBM uses the phrase “Smarter Planet” to describe its vision for the future, in which the world’s systems become increasingly instrumented, interconnected and infused with intelligence in order to better manage the environments in which we work, play and live.

Real examples of current and emerging technologies are starting to enable “smarter” solutions for vital domains such as energy, transportation, food production, conservation, finance and healthcare. For such an ambitious vision to become reality, there are many technical challenges to address, among them being how to manage performance and capacity for the supporting IT systems.

This new Redpaper discusses performance and capacity implications of these solutions. It examines the Smarter Planet vision, the characteristics of these solutions (including those that make performance and capacity management particularly challenging), examples of addressing performance and capacity in a number of recent Smarter IT projects, recommendations from what has been learned thus far, and discussions of what the future may hold for these solutions.

Written by benchmarkingblog

June 13, 2011 at 3:42 pm

Posted in Performance, Smarter
