benchmarkingblog

Elisabeth Stahl on Benchmarking and IT Optimization

Oracle Meets That ’70s Show


Last week I made the annual spring break pilgrimage to my childhood home in the shadows of the cherry blossoms.

What always strikes me when I visit — and you’ve probably had the same experience — is how nothing, almost nothing, has changed since I lived there four decades ago. Yes, there’s a huge TV with cable now. And a cell phone, though not so smart yet. And an iPad that always needs something done to it. But other than these few new features, the general layout and beauty of the interior is essentially the same.

Which I love. Why get new kitchen cabinets when you can take the beautiful solid wood ones and have them refinished? Why buy new cheap chairs when '50s Danish Modern is built so well and is gorgeous to boot?

But one of the best examples of this retro environment, hands down, has to be the downstairs bathroom. When you enter, you are transported to the time of Nixon and Sonny and Cher. The colors are tremendous – bright, bright yellows and oranges. Big plaid wallpaper. And wicker accessories. A '70s dream of a bathroom. And you know what — it still looks great. The glamour of everything from the '70s has returned in full force in this one tiny room.

But some things are not meant to come back. And that includes the way some vendors compare systems and benchmarks.

I recently saw a comparison from Oracle pitting the SPARC T7-1 against the IBM Power System S824. It brought me right back to when I started blogging almost ten years ago, when we were all inundated with benchmark flaws. Let's take a look at some of the details:

  • The tool Oracle used to compare the systems is NOT an industry standard benchmark audited by a third party. It is a tool that can be used by anyone. Oracle ran all tests themselves.
  • The tool used is adapted from the TPC-C benchmark, which Oracle itself has stated in the past it considers dated.
  • The disks used in the systems compared are not the same – HDD vs. SAS.
  • The logs and database files for the IBM test were not run on the IBM system – they were run on a different Oracle system.
  • Solaris 11.3 was used for the logs and database file systems on the Oracle side; Solaris 11.2 was used for the IBM configuration.
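A fair comparison requires identical configurations on both sides. As an illustrative sketch (the values below are hypothetical, not the actual tested configurations), even a few lines of code can flag when two "compared" systems are not set up identically:

```python
# Illustrative sketch: given two benchmark configurations as dicts,
# list every field where they differ. Any mismatch means the
# comparison is not apples-to-apples.

def config_mismatches(a: dict, b: dict) -> list:
    """Return the keys on which two benchmark configs disagree."""
    return sorted(k for k in a.keys() | b.keys() if a.get(k) != b.get(k))

# Hypothetical values for illustration only.
oracle_side = {"storage": "SSD", "os": "Solaris 11.3", "tester": "vendor"}
ibm_side = {"storage": "HDD", "os": "Solaris 11.2", "tester": "vendor"}

print(config_mismatches(oracle_side, ibm_side))  # → ['os', 'storage']
```

An empty result is the minimum bar; a non-empty one means any headline ratio is comparing configurations, not systems.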

 

A photo of my childhood downstairs bathroom was Instagrammed recently. It received 35 likes, over half of them from students at the best design school in the country. That makes sense. Oracle’s benchmark comparisons don’t.

 

************************************************

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

March 23, 2016 at 10:07 am

Posted in Oracle, POWER8, SPARC


Embracing the Cognitive World Every Day with IBM Systems


Read the full article here

OK, so it was time. I didn’t have an excuse anymore. That I had a report due at work, or that the holidays were coming, or that I had to go to the dentist.

It was finally time to do something that I had avoided for almost a year. Something that was even worse, if you can believe it, than preparing my taxes. It was time to make the dreaded updates to my insurance policy.

These were not the sort of quick changes that I could easily do online or rapidly with a phone call. These were excruciatingly detailed updates to all of my policies–home, auto and personal. They came with multiple liabilities, multiple schedules and multiple riders. I would need to block out many hours of the day for this one. And suffer the nightmare involved in these complex negotiations with my insurance agent to hedge against the risk of an ugly, contingent, uncertain loss.

As it turned out, my foray into policy updates ended up taking weeks, not hours. The complexity of the millions of arcane rules around these types of policies is mind-boggling. Are you living in the state of Ohio with 3 1/2 baths? How many boats do you own? Gosh forbid you have any male teenage drivers. Or a dog.

But this very situation is actually a typical everyday situation where cognitive computing can really shine . . .

************************************************



Written by benchmarkingblog

March 2, 2016 at 11:06 am

Posted in Cognitive, Watson


#CMG2015: Performance Paradise


Do you work in Systems Performance, IT Benchmarking, or Capacity Planning? Then you definitely need to be a part of the Computer Measurement Group (if you are not already)!

Computer Measurement Group (CMG) is a not-for-profit, worldwide organization of IT professionals committed to sharing information and best practices focused on ensuring the efficiency and scalability of IT service delivery to the enterprise through measurement, quantitative analysis and forecasting.

For decades CMG has been a leading organization for the exchange of information among enterprise computing professionals. Anyone charged with the measurement and management of computer systems would benefit from membership in CMG.

We recently held our annual international technical conference in San Antonio, home of the Alamo and the amazing River Walk.

Where else can you enjoy multiple days learning from and sharing with a few hundred of the best performance and capacity people in the world?!

We had a great mix of topics at this conference (Full Disclosure: I am the Program Chair of this conference) across many focus areas including Performance Engineering, Application Performance Management, Mobile and Web Performance, Mainframe Performance and Capacity Planning, Network Capacity and Performance, Storage, and much more.

Here’s just a small sample of some of the awesome presentations:

  • I Feel the Need for Speed
  • Managing the Datacenter as the Computer
  • Tackling Big Data
  • Performance Considerations for Public Cloud
  • Why is this Web App Running Slowly?

I was a speaker on two exciting panels. The first was on Hybrid Cloud. I discussed how the Fit for Purpose methodology can work when deciding on the right environmental mix of on-premise, off-premise, private, and public cloud: namely, the Best Execution Venue. The second panel was on advancing your career in the Performance area, where I had a few good stories to tell.

The key to this conference and to this group as a whole is the laser-like focus on all of the groundbreaking, state-of-the-art areas in IT — but with an extreme emphasis on how they relate to Performance and Capacity.

So we talked Cloud. But Cloud with Performance. We talked Analytics. But Analytics with Performance. We talked Testing. But Testing with Performance . . .

All of the learning is wonderful. But I would have to say, as we tend to say about all events, that the networking and sharing is the absolute best part. What a Wonderful World of a conference.

And CMG is not just an annual conference. It’s an organization that you can be part of year-round through webinars, papers, articles, journals, regional groups and even social media.

Working on this conference was like planning a wedding: it can only go off without a hitch with an outstanding team to make it happen. Now the honeymoon begins. Until next month, when we start it all over again for #CMG2016!

************************************************



Written by benchmarkingblog

November 10, 2015 at 3:23 pm

Posted in CMG


Back in Time with Oracle


Some of you may know that this week was a very big one for “Back to the Future” movie fans. On Wednesday, Oct. 21, 2015, at 4:29 p.m., our today caught up to the tomorrow depicted in “Back to the Future, Part II.” In that 1989 film, a DeLorean time machine appears from 30 years in the past.

To those who love time travel, this was a really big deal. Some towns even went so far as to rename themselves after the featured city in the film. Ceremonies worldwide were performed at exactly 4:29 p.m.

And this reminded me of a benchmark result that was just published today by Oracle on the SAP SD benchmark.

As we move into newer digital workloads, some of the older industry benchmarks have gone by the wayside. Many of us have spent a lot of time analyzing these newer workloads and developing new metrics for them. But one classic benchmark is still extremely appropriate for many of today’s applications – and that is the suite of SAP benchmarks.

But this new Oracle result just published is clearly dated — even though it is a brand new result on a brand new Oracle SPARC system. The IBM Power Systems result with DB2 from over 1 year ago is over 2X better performance per core than this new Oracle SPARC result. (1)
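The arithmetic behind that claim is easy to check against the certified figures cited in the footnote:

```python
# Checking the "over 2X better performance per core" claim against the
# certified SAP SD figures cited in the footnote below.
ibm_users, ibm_cores = 79_750, 80          # IBM Power E870, Cert #2014034
oracle_users, oracle_cores = 30_800, 64    # Oracle SPARC T7-2, Cert #2015050

ibm_per_core = ibm_users / ibm_cores           # ≈ 997 SD users per core
oracle_per_core = oracle_users / oracle_cores  # ≈ 481 SD users per core

ratio = ibm_per_core / oracle_per_core
print(f"IBM per-core advantage: {ratio:.2f}x")  # → 2.07x
```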

What’s really exciting, unlike this new benchmark result, is that many of the predictions of the future in the “Back to the Future” movie were right on. But I am still waiting for the dog-walking drone.

************************************************


(1) IBM Power Enterprise System E870 on the two-tier SAP SD standard application benchmark running SAP enhancement package 5 for the SAP ERP 6.0 application; 8 processors / 80 cores / 640 threads, POWER8, 4.19 GHz, 2048 GB memory; 79,750 SD benchmark users; running AIX® 7.1 and DB2® 10.5; dialog response: 0.97 seconds; order line items/hour: 8,722,000; dialog steps/hour: 26,166,000; SAPS: 436,100; database response time (dialog/update): 0.013 sec / 0.026 sec; CPU utilization: 99%; Cert #2014034. Vs. Oracle SPARC T7-2 result of 30,800 users; average dialog response time: 0.96 seconds; fully processed order line items/hour: 3,372,000; dialog steps/hour: 10,116,000; SAPS: 168,600; average database request time (dialog/update): 0.022 sec / 0.047 sec; CPU utilization of central server: 98%; operating system, central server: Solaris 11; RDBMS: Oracle 12c; SAP Business Suite software: SAP enhancement package 5 for SAP ERP 6.0; Certification #2015050. SPARC T7-2: 2 processors / 64 cores / 512 threads, SPARC M7 4.133 GHz, 16 KB (D) and 16 KB (I) L1 cache per core, 256 KB (D) L2 cache per 2 cores and 256 KB (I) per 4 cores, 64 MB L3 cache per processor, 1024 GB main memory.

SAP and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. All other product and service names mentioned are the trademarks of their respective companies.


Written by benchmarkingblog

October 23, 2015 at 12:59 pm

Posted in Oracle, SAP


Amazon, Don’t Be A Performance Amateur


I read just this morning that La Guardia airport in New York, with its dilapidated terminals and long delays, will be at long last rebuilt by 2021.

The plans look promising and work has already started. With new taxiways, a train and a grand entryway, it will finally be something to be proud of. Major infrastructure, certainly needed for one of the big-league cities of the world.

And to play in the big league, you need to have the right plans to study and analyze, and you need to know what you are talking about. Which is why I was so disappointed this morning to also read about some new performance claims from Amazon Web Services (AWS).

In an announcement of a new relational database offering, Amazon made claims that simply had me confused. Let’s take a look:

  • The claims mix up performance with price performance. Obviously this difference is pretty basic. And important — but especially important in this environment where AWS charges extra for database instances, storage, and I/O.
  • The claims mix up speed and throughput. This difference can be very important because in this environment there are only 3 AWS regions right now offering these services and network performance can be key.
  • The claims mix up general comparisons with other “existing solutions” with a comparison using one particular tool, SysBench, to one particular release of one particular database, MySQL 5.6.
  • The claims mix up whether any improvement is due to software or hardware while stating that special techniques were used on both. Need I say more.
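The first two mix-ups deserve a tiny worked example. With made-up numbers (not AWS's actual pricing or results), the fastest system and the best price-performance system can easily be two different machines:

```python
# Toy numbers only (not AWS's actual pricing or benchmark results):
# raw performance and price-performance can rank systems differently.
systems = {
    "system_a": {"tps": 10_000, "cost_per_hour": 4.00},
    "system_b": {"tps": 7_000, "cost_per_hour": 2.00},
}

for spec in systems.values():
    spec["tps_per_dollar"] = spec["tps"] / spec["cost_per_hour"]

fastest = max(systems, key=lambda n: systems[n]["tps"])
best_value = max(systems, key=lambda n: systems[n]["tps_per_dollar"])

print(fastest)     # → system_a  (raw performance winner)
print(best_value)  # → system_b  (price-performance winner)
```

A claim that quietly swaps one metric for the other is telling you about the vendor's pricing, not the system's speed.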

To play in the majors you have to understand the complexities of the subject. In its attempt to address the performance of this new offering, AWS is clearly showing its minor-league stripes.

Have you ever been at that gate at La Guardia, I think it’s A1A, where you have to carry your suitcase down two flights of stairs to a small waiting room with no air?

************************************************


Amazon Web Services and the “Powered by Amazon Web Services” logo are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.


Written by benchmarkingblog

July 28, 2015 at 11:41 am

Posted in Amazon, Cloud


What’s In Your Bag?


With summer just beginning in this part of the world, vacations are on everyone’s mind. And for me, that means hiking.

I actually have a list of everything that should go into my hiking knapsack. It’s written on a scrap of an old envelope and was first used prior to my going up Mount Washington. Here’s what’s on it:

  1. The Electronics: compass, map, phone, headlamp
  2. The Emergency Food: trail mix and granola bars, extra water
  3. The Moleskin: for my big right toe
  4. The Defense: bear spray and pocket knife
  5. The Sweater: my old gray cashmere with the big holes
  6. The Support: my hiking poles
  7. Just In Case: bug net, bandages, extra wool socks, hat, gloves, rain pants and long underwear
  8. If I Get in Trouble: whistle and waterproof matches
  9. The Drug of Choice: Motrin — for my back
  10. May be needed at the end: After Bite and the hot tub

Without these, I’d be lost. Literally. Maybe even worse.

And I was reminded the other day that the same type of preparation I use for my hiking trips is imperative when preparing my laptop bag for a business meeting.

And I realized that in the end I bring pretty much the same stuff.

  1. The Electronics: chargers, pointers, batteries
  2. The Emergency Food: cereal bars and pretzels, in case they don’t feed you
  3. The Moleskin: calendar that is – to schedule the next meeting
  4. The Defense: quick wit and verbal barbs
  5. The Sweater: my nice black cashmere, for when the air conditioning blows
  6. The Support: list of other subject matter experts
  7. Just In Case: the cheat sheet with the latest POWER8 news, the titles of who will be at the meeting, and the fun-to-read magazine because you never know when you are going to have to wait
  8. If I Get in Trouble: AAA or American Express Travel
  9. The Drug of Choice: Motrin — for my head
  10. May be needed at the end: drink in the hot tub

************************************************



Written by benchmarkingblog

June 25, 2015 at 4:00 pm

Posted in Uncategorized

Will the Real Benchmark Please Stand Up


They are at it once again. Those imposter benchmarks.

You know. The ones that initially look and feel like real IT industry performance benchmarks.

But then you read the article again, you look a bit more closely and you realize. They are at it again.

So how can we detect and overcome this benchmark fraud?

  • Make sure the names of the actual benchmarks are clearly stated. You know, something with letters like TPC, SPEC, SAP, STAC, . . .
  • Make sure the metrics are correct. You know, something like transactions per minute or number of users.
  • Make sure there’s a really good footnote with all the details. Just the data is not enough.
  • Make sure there is a link to the site about the benchmark and preferably the results.
  • Make sure, if you sense an imposter benchmark, to find REAL data on the systems you are interested in at an official benchmark or vendor site, or run the real workload as a client benchmark.
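The checklist above can even be sketched as code. This is purely illustrative; the field names are my own invention, not any official schema:

```python
# Sketch of the checklist as code: a claim counts as a "real" benchmark
# citation only if it names a known benchmark body, states a metric,
# and carries a footnote plus a link to the results.
KNOWN_BODIES = ("TPC", "SPEC", "SAP", "STAC")

def looks_legit(claim: dict) -> bool:
    named = any(body in claim.get("benchmark", "") for body in KNOWN_BODIES)
    return named and all(claim.get(k) for k in ("metric", "footnote", "results_url"))

# Hypothetical claims for illustration only.
imposter = {"benchmark": "internal speed test", "metric": "faster"}
real = {"benchmark": "TPC-C", "metric": "tpmC",
        "footnote": "full config disclosed", "results_url": "http://www.tpc.org"}

print(looks_legit(imposter), looks_legit(real))  # → False True
```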

If you’re not seeing these things, very likely it is some obscure testing that may or may not have a proper benchmark kit, audited results, etc. And it may very likely be artificially tuned to exploit only certain hardware or software that the imposter is looking to promote.

An industry benchmark masquerader that is actually a tried and true swindler.

************************************************



Written by benchmarkingblog

June 4, 2015 at 6:31 pm

Posted in Uncategorized


World Peace . . . and Cloud


I was reading an article in the paper this morning on reactions to the riots and looting in the city of Baltimore. And one point struck me – citizens of nearby neighborhoods seemed at a loss for what they could do to help their poverty-stricken neighbors. The comment the journalist kept hearing over and over again was essentially: Yes, but what can I do?

Sometimes it’s hard to go from thinking about big problems strategically to tactical methods with concrete steps.

And that reminded me of something I’ve been seeing in IT lately, specifically in the area of transforming a system to a cloud.

A lot of us get hung up on the long term strategic big ideas, dreams, and wishes for our infrastructure. In 10 years I want to be able to . . . I envision a world where our data . . . In the future I will connect this system to . . . I will manage and control and orchestrate — someday.

In reality, we can get started on our dreams with 3 simple steps:

  1. Understand the Roadmaps for hardware and software on your current system and any new products being considered – What is supported now and in the future?
  2. Determine which hardware and software in your environment is appropriate to contain in your private Cloud.
  3. Create your private Cloud management system to MANAGE your infrastructure by either employing a tool such as IBM Power Virtualization Center (PowerVC) or creating an OpenStack tool of your own. Use this system to manage your compute (creating LPARs…), storage (managing SAN…), and network (allocating LUNs…) infrastructure. Start right now.

Now you have your private cloud and can consider some advanced steps:

  • Adopt a Cloud CONTROL and project management system such as IBM Cloud Manager with OpenStack for a self-service portal to create accounts and assets with a single pane of glass.
  • Consider implementing advanced ORCHESTRATION with a tool such as IBM Cloud Orchestrator, which facilitates the more complex workflows needed to deploy reusable pattern solutions and takes advantage of libraries for more advanced cloud capabilities.

Then you are on your way with a sophisticated private cloud environment. Connect these systems of record and insight to your systems of engagement, potentially in the public cloud space, and you now have a full Hybrid environment.

As for Baltimore, it’s amazing to me that people seem to think they need to do everything or do nothing. Every little thing can help. Work with a student who can’t read. Donate some time or assets to a non-profit. It doesn’t have to be that hard.

************************************************



Written by benchmarkingblog

May 5, 2015 at 12:10 pm

Posted in Cloud


My Washing Machine and 5 Myths on Cloud


It was finally time. My clothes washer had been shaking for months. I actually resorted to running everything on Delicate. Tried to fix it. Put rubber pads under it. And then it finally danced off across the floor. And stopped.

Like many appliances, when you look at new washers there are so many things to compare. Front loader vs. top loader? Regular vs. High Efficiency? Agitator or not?

Of course I went to social media to get the 411. And what I found is that many consumers love a certain brand or model. Or hate it. And there is altogether too much customer sentiment on minor features that you may not even care about. Like lid locks.

Lid locks keep you from opening the washer easily to add more clothes. Something I never do. And something I really don’t care about. But it seems that some people really care an awful lot about lid locks. The point is – should I make my decision based solely on this narrow view?

And that’s what I have been seeing lately on Cloud. Like taking a narrow view of infrastructure platforms. Like limiting cloud scope to virtual public cloud only. Like forgetting that Cloud should be tactical and strategic, where performance, security, and compliance are key.

Here are 5 myths I’ve seen:

  • Cloud means only infrastructure. FALSE. Don’t forget software and business applications via the Cloud, a whole Marketplace.
  • Cloud means only public cloud. FALSE.  As we know, public cloud is a great enabler but on-premise private platforms are imperative for critical business systems of record.
  • Cloud means x86. Or AWS. Or Azure. Or . . . FALSE. Higher-end systems such as Power Systems and System z are of course leaders in private on-premise cloud. Power is also an outstanding choice on the public side, via SoftLayer or Cloud Managed Services.
  • Cloud means cheap public virtualized cloud. FALSE. Do the math. Some public cloud options can initially look inexpensive – but watch the hidden costs. Check out what it costs to actually access or move data. You may be surprised by the TCO you calculate.
  • Cloud means good for everything. FALSE. Match your workloads to the best technologies. Public cloud is not right for everything. Private cloud is not right for everything. Cloud is not even right for everything.
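Here is what "do the math" can look like, with deliberately made-up rates (no vendor's real pricing):

```python
# Deliberately made-up rates; the point is the shape of the math,
# not any vendor's actual pricing.
instance_per_month = 500.0     # advertised monthly compute cost
egress_tb_per_month = 20       # data moved out of the cloud each month
cost_per_tb_egress = 90.0      # hypothetical per-TB transfer charge
months = 36                    # a three-year horizon

compute_only = instance_per_month * months
true_cost = compute_only + egress_tb_per_month * cost_per_tb_egress * months

print(compute_only)  # → 18000.0
print(true_cost)     # → 82800.0  (more than 4x the advertised number)
```

The advertised quote covers only the first term; the hidden terms compound every month.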

Make sure to focus on the full scope of cloud infrastructure platforms, the numerous choices offered, and the full suite of IBM’s cloud portfolio, on-premise and off-premise.

In the end, my new washing machine had a lid lock. But only because it happened to come with one.

************************************************



Written by benchmarkingblog

April 2, 2015 at 4:19 pm

Posted in Cloud


And the Oscar Goes to . . . IBM


It seems as if just a few years ago I actually used to get excited about the big night. The big night that was last night, the 87th Academy Awards. I used to watch with friends, with family. I even went to an awards party once.

In the past I even saw the movies that won — before they won.

And I think that’s how some of us have been feeling lately about industry standard performance benchmarks. Remember the good old days of leapfrogging? Of vicious ads and blogs? Of fights over TPC-C?

But recently I was super impressed with a brand new IBM publication last week of the SPECjEnterprise2010 benchmark. SPECjEnterprise2010 emulates an automobile dealership, manufacturing, supply chain management and order/inventory system and was designed to stress the Java EE application server. It's an excellent measure of middleware.

The new IBM result running WebSphere and DB2 was the best Intel “Haswell” EP result, over 31% greater per core than the just published Oracle result with WebLogic.(1) And of course, the flagship IBM POWER8 result is the #1 per core result in the industry, over 79% greater per core than the new Oracle x86 result.(2)
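Both per-core claims can be reproduced from the footnoted EjOPS figures:

```python
# Reproducing the per-core arithmetic from footnotes (1) and (2) below.
ejops = {
    "Oracle X5-2 (WebLogic)": (18_800.76, 36),       # total EjOPS, cores
    "IBM x3650 M5 (WebSphere/DB2)": (19_282.14, 28),
    "IBM Power S824 (POWER8)": (22_543.0, 24),
}

per_core = {name: total / cores for name, (total, cores) in ejops.items()}
oracle = per_core["Oracle X5-2 (WebLogic)"]          # ≈ 522.24 EjOPS/core

for name, value in per_core.items():
    gain = (value / oracle - 1) * 100
    print(f"{name}: {value:,.2f} EjOPS/core ({gain:+.0f}% vs. Oracle)")
```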

At least I still have the passion for benchmarks. Because nowadays my interest in the Academy Awards pretty much revolves around a few dresses on the red carpet. And I can get that not by staying up all night but with a couple of photos online the next morning.

************************************************


(1) 36-core (2 × 18-core processors) Oracle Server X5-2, WebLogic 12.1.3: 18,800.76 SPECjEnterprise2010 EjOPS, 522.24 EjOPS per app core. 28-core (2 × 14-core processors) IBM System x3650 M5 Lenovo server, WebSphere 8.5.5.4, DB2 10.5: 19,282.14 EjOPS, 688.65 EjOPS/core.
(2) 24-core IBM Power S824 (3.52 GHz) database server running DB2 10.5 / 24-core IBM Power S824 (3.52 GHz) application server running WebSphere 8.5: 22,543 SPECjEnterprise2010 Enterprise jAppServer Operations Per Second (EjOPS), 939 EjOPS/core.

Source: http://www.spec.org. All results current as of 2/23/15.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).

TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).


Written by benchmarkingblog

February 23, 2015 at 9:41 am