benchmarkingblog

Benchmarking and Systems Performance

Oracle’s SPARC Enhancements: Construction or Wind?


Two nights ago I spent a lovely 6 hours in the airport. Flight cancelled, next plane delayed waiting for its incoming aircraft, no runways to be had at one of the largest airports in the country. Announcement 1: there was only one runway because the others were under construction. Announcement 2: there was only one runway that could be used because the wind patterns were strange.

All you want is to get home to your couch and your dog. At the same time, it would be great to get the real story on what is happening. Not because it changes anything; you just want it to make sense.

And that’s exactly how I was feeling again as I read one of Oracle’s recent press releases on the Fujitsu SPARC M10 “enhancements.” The claim was for “15 world records.” I decided to take a look at each one just to know: was it the construction or the wind?

1. Oracle needed 2.5x more cores/memory than IBM. The IBM result was from 4 years ago.
2. Oracle needed 2x more cores/memory than IBM. The IBM result was from 4 years ago.
3. Oracle compared themselves with themselves.
4. Oracle compared themselves with themselves.
5. Oracle needed 2x more cores than SGI.
6. Oracle compared themselves with themselves.
7. Oracle needed 2x more cores than IBM.
8. Oracle compared themselves with themselves.
9. Oracle needed 4x more cores than IBM.
10. Oracle compared themselves with themselves.
11. Oracle picked on little x86.
12. Oracle compared themselves with themselves.
13. Oracle needed 16x more cores than IBM. The IBM result was from 6 years ago.
14. Oracle needed 8x more cores than IBM. The IBM result was from 6 years ago.
15. Oracle needed 8x more cores than IBM. The IBM result was from 6 years ago.

Also note that there are really only 4 different benchmarks among these 15 claims. And notably, all but 2 of the 15 are in the Technical Computing space, using simple component-type benchmarks.

So that’s the real story. The other real story is that if I had driven the 500 miles I would have been home much faster.

************************************************

The postings on this site solely reflect the personal views of the author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.


Written by benchmarkingblog

April 11, 2014 at 2:49 pm

Posted in SPARC


Digging into SAP HANA on HP


All of the snow has finally melted in my backyard. And what that means to me is that I don’t have to shovel for a while. What that means to my dog is another story.

It’s a field day for him. Now he can get back to what he surely thinks is his real job: digging in the dirt. And what has he found? A plastic flower pot that he can chew on. 3 beat-up tennis balls from the summer. A soup bone from 3 months ago. Treasures.

HP just announced this morning that they are delivering a “System with Faster Analytics Engine for SAP HANA Environments.” HP claims a 2x performance advantage over other solutions. Let’s dig into this claim and take a look at the facts:

  • The performance claim is based on the SAP BW Extended Mixed Workload Benchmark, a benchmark with only 4 results. And 3 of those results are from, you guessed it, HP.
  • The SAP BW-EML benchmark results that HP references in their footnote in this press release are from September 2013 — a lifetime ago in the benchmark world.
  • The HP system they reference in the press release is not even the system that is in the benchmark. HP’s new system is not even available.
  • Even if you did try to compare the HP and IBM results, it does not make sense. The HP and IBM results are in different categories of the benchmark, using a different number of records. The HP result used an application and database tier; the IBM result is on a central server.
  • Even if you set aside the fact that it doesn’t make sense to compare these results: the HP result used 2.5x the processor cores, 2x the memory, 4x the L1 cache, and 2x the L2 cache per core. So the IBM result actually has 28% better throughput per core than the HP result.(1)
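That per-core claim can be sanity-checked from nothing but the throughput figures and core counts in the footnoted results. A minimal sketch, counting HP’s two-tier configuration as 40 + 40 cores as the footnote lists them:

```python
# Per-core throughput from the two footnoted SAP BW-EML results.
ibm_steps_per_hr = 66_900     # certification #2013020, 32-core central server
ibm_cores = 32
hp_steps_per_hr = 129_930     # certification #2013027, 40-core DB tier + 40-core app tier
hp_cores = 40 + 40

ibm_per_core = ibm_steps_per_hr / ibm_cores   # ~2,090 steps/hr per core
hp_per_core = hp_steps_per_hr / hp_cores      # ~1,624 steps/hr per core

advantage = ibm_per_core / hp_per_core - 1
print(f"IBM per-core advantage: {advantage:.1%}")  # ~28.7%
```

Rounded down, that is the 28% figure quoted above.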

Please HP, dig more and come up with some new exciting treasures next time so we readers are not left in the dirt.

************************************************

(1) SAP BW-EML Benchmark, central server: IBM Power 750 Express Server, 4 processors / 32 cores / 128 threads, POWER7+, 4.06GHz, 32KB(D) + 32KB(I) L1 cache and 256KB L2 cache per core, 10MB L3 cache per core, 512GB main memory; certification #2013020 (SUSE Linux Enterprise Server 11, DB2 for i 7.1, SAP NetWeaver 7.30, 66,900 ad-hoc navigation steps/hr) vs.
SAP BW-EML Benchmark, database tier: HP ProLiant DL580 G7, 4 processors / 40 cores / 80 threads, Intel Xeon Processor E7-4870, 2.40GHz, 64KB L1 cache and 256KB L2 cache per core, 30MB L3 cache per processor, 512GB main memory; application tier: HP ProLiant BL680 G7, 4 processors / 40 cores / 80 threads, Intel Xeon Processor E7-4870, 2.40GHz, 64KB L1 cache and 256KB L2 cache per core, 30MB L3 cache per processor, 512GB main memory; certification #2013027 (SUSE Linux Enterprise Server 11, SAP HANA 1.0, SAP NetWeaver 7.30, 129,930 ad-hoc navigation steps/hr).
Source: http://www.sap.com. Results current as of 3/19/14.

SAP and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. Other names may be trademarks of their respective owners.



Written by benchmarkingblog

March 19, 2014 at 12:19 pm

Posted in HP, SAP


With Manatees and SDE, Optimizing the Environment


Last week I had the opportunity to be in the warmest place on earth — for manatees, that is. About 30 miles outside of Tampa, Florida, is the Manatee Viewing Center. It is actually part of a power plant, which is why, when it’s so so cold outside, it is so so popular with manatees. When Tampa Bay reaches 68 degrees or colder, the mammals seek out a canal by the power plant filled with warm saltwater. This area is now a state and federally designated manatee sanctuary that provides critical protection from the cold.

At the Manatee Viewing Center, everywhere you go there are manatees. On the day I went, you could see hundreds of them in the water, sucking on barnacles and soaking up the sun. And manatees are everywhere out of the water, too: pictures of manatees at the entrance, manatee billboards to take your picture with, plastic manatees near the picnic tables. A manatee Instagram extravaganza. And of course, as a sucker for cute gift shops, I came home with manatee postcards, a manatee key chain, manatee pajama shorts, and an awesome manatee sun dress.

The Manatee Environment got me to thinking about some terminology we are using a lot these days in IT — Software Defined Environment.


The Internet of Things is changing the way businesses interact, engage, and transact with consumers. These new-age interactions have to be supported by up-to-the-minute, programmable infrastructure. This movement from static, legacy infrastructure to infrastructure-on-demand is the software defined environment.

A Software Defined Environment (SDE) optimizes the entire computing infrastructure — compute, storage and network resources — so that it can adapt to the type of work required.

In today’s environment, resources are assigned manually to workloads; that happens automatically in an SDE. In an SDE, workloads are dynamically assigned to IT resources based on application characteristics, best-available resources, and service level policies to deliver continuous, dynamic optimization.

Integration, automation, and optimization. Those are enablers of some of our most important IT applications today: cloud delivery and Big Data analytics. SDE has the capability to accelerate business success by integrating these workloads and resources so you have a responsive, adaptive environment.

So SDE really is an end-to-end view comprising Compute (SDC), Storage (SDS), and Network (SDN). SDE brings to life the old Service Level Agreement (SLA) concept we know and love — and puts it on steroids.

SDE is not achieved overnight but is built over time. The three phases, or levels, of SDE that are essentially our goals to achieve are:

  1. Open virtualization of resources across domains
  2. Policy-based optimization and elastic scaling
  3. Application-aware infrastructure

Patterns, which are best practice connections between applications and service levels, determine policies for workloads. These policies focus on many non-functional requirements including performance and security. For instance, you could have a silver level policy or SLA that employs PowerVM, Storwize, and certain switches. Or z/VM, SmartCloud, GPFS, OpenFlow . . . And this policy could easily and automatically change based on application or compute, storage, or network changes. Because of this holistic focus, IBM is uniquely able to assist with SDE hardware, software, and services.
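As a rough illustration of the pattern-to-policy idea, here is a minimal sketch in Python. The tier names, workload name, and dictionary structure are hypothetical examples assembled from the products mentioned above, not an actual IBM SDE API:

```python
# Hypothetical SLA-tier policies mapping a service level to a resource stack.
# Product names follow the examples in the text; the structure is illustrative.
policies = {
    "silver": {"compute": "PowerVM", "storage": "Storwize", "network": "switch-fabric"},
    "gold":   {"compute": "z/VM", "storage": "GPFS", "network": "OpenFlow"},
}

def place_workload(workload: str, tier: str) -> dict:
    """Resolve a workload's SLA tier to a concrete compute/storage/network stack."""
    stack = policies[tier]
    return {"workload": workload, **stack}

print(place_workload("order-entry", "silver"))
# {'workload': 'order-entry', 'compute': 'PowerVM', 'storage': 'Storwize', 'network': 'switch-fabric'}
```

The point of the indirection is the one the paragraph makes: when the policy for a tier changes, every workload bound to that tier picks up the new stack automatically.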

SDE is a mosaic of technologies that outlines an emerging data center operating model; one that is increasingly at the core of creating a successful and sustainable cloud service business model.

Like the manatees automatically seeking out the warm waters, with SDE we can automatically achieve the best for our workloads.

Ultimately, for all of us, it’s all about optimization. Of whatever environment we’re in.

************************************************



Written by benchmarkingblog

January 29, 2014 at 9:44 am

Posted in SDE


Come on Oracle, Get “With It” Benchmarking


I admit that many weekends this time of year you will find me (when I’m not enslaved by the leaf blower) curled up with a good old book on the old couch with my thankfully not so old dog.

But this weekend I truly was “with it.”

On Saturday night I attended one of the most sought-after, sold-out concert events ever to hit this town. I got to see a Pink concert that included not only 17 of her best songs but Pink flying through the air doing acrobatics that you simply would not believe. A rock concert rolled right into the circus, truly amazing.

And then, to top it all off, on Sunday night I attended one of the most sought-after, sold-out movie events of the year. I got to see the latest Hunger Games, Catching Fire, the second in this awesome trilogy, which could be even more popular than the first.

So on Monday when I saw the latest Oracle SPARC T5-4 benchmark result on the TPC-H decision support benchmark (1), all I could think was how so “not with it.”

Like Gangnam Style this year. Or What Does the Fox Say this month.

Hey, I’m the first one to like legacy. My closet is filled with vintage looks. I love retro — just not when it comes to benchmarks.

Here is what you need to know.

  • First of all, this is TPC-H. Yawn. We’re ready for something new here.
  • Most of the TPC-H results in this category are grayed out, considered “historic.” This result is right next to a result from IBM — from 2007 (yes, you heard that right).
  • The Total Storage to Database Size ratio is a massive 60.80. Talk about overkill on storage to achieve performance. This is many, many times the ratio we’ve seen from other results.
  • Load time is a whopping 9.63 hours.
  • 128 query streams are needed; most results use far fewer. TPC-H has a limited number of query variations, so when you run a lot of streams, there is a high probability that the same queries will be requested more than once. Oracle is greatly increasing the probability that the results of those queries are already sitting in its cache — which may not be representative of how the product would perform in a truly ad hoc query environment.
  • Oracle once again included extremely minimal support in its pricing. Does $2,300 a year sound like what you are paying for software support?
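The repeated-query point is just the birthday problem. As a rough illustration, suppose a query template had 1,000 distinct parameter substitutions (a hypothetical round number, not a TPC-H figure); with 128 streams each drawing one at random, a repeat is near certain:

```python
# Birthday-problem sketch: chance that at least two of 128 streams draw
# the same substitution parameters for one query template.
variations = 1_000   # hypothetical count of distinct parameter substitutions
streams = 128

p_no_repeat = 1.0
for i in range(streams):
    p_no_repeat *= (variations - i) / variations

print(f"chance of a repeated query: {1 - p_no_repeat:.2%}")  # well over 99.9%
```

The more streams you run against a fixed pool of query variations, the more often answers come straight from cache instead of from real query processing.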

************************************************
(1) Oracle TPC-H result of 377,594 QphH@10000GB, $4.65 per QphH, availability 11/25/13; Oracle Database 11g R2 Enterprise Edition w/ Partitioning; SPARC T5 3.6 GHz; total processors: 4, total cores: 64, total threads: 512.
Source: http://www.tpc.org. Results current as of 11/25/13.

TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).



Written by benchmarkingblog

November 26, 2013 at 11:28 am

Posted in SPARC T5, TPC-H


Why IT Infrastructure Really Really Matters


I went apple picking with the dog last weekend. The orchard was sodden with rain but the trees were heavy with beautiful fruit. I picked one, took a bite. My lab took many bites. The Melrose apple had beautiful red skin and lovely white fruit and was incredibly crisp.

But the taste was not so sweet. And certainly not as sweet as previous years. I later heard that this year, because of various aspects of the infrastructure (like temperature and rainfall in this case), none of the types of apples have been as sweet.

Infrastructure matters.

When we talk about things that really matter to us in our business – like availability of our systems, security of our business, performance of our applications – ultimately we are talking about satisfaction of our most important entity, our customers.


What drives these nonfunctional requirements of our business ends up being our underlying infrastructure. So in the end, our IT infrastructure plays a critical role in our success.

Just like the proliferation of pumpkins lately, IBM has a slew of awesome announcements today that address this critical IT infrastructure. Power Systems and Smarter Storage, as well as PureSystems and other IBM technologies, bring together industry-leading capabilities for the best enterprise-class infrastructure with virtualization and cloud technology, including:

  • Enterprise-class systems: leadership performance, resilience, and resource sharing
  • Enterprise-class virtualization and cloud management
  • Flexible, efficient workload deployment with Elastic Capacity on Demand (COD) and Power Integrated Facility for Linux (IFL)
  • Power Enterprise Pool with Mobile COD for unprecedented availability, security, and flexibility
  • Big Data and analytics focus: IBM BLU Acceleration Solution – Power Systems Edition
  • Cloud storage and Storwize offerings for efficiency and value


Infrastructure was never so sweet.

************************************************



Written by benchmarkingblog

October 8, 2013 at 12:05 pm

Posted in announcement


Guns and Butter at OpenWorld


I guess when you are really really rich you can do things like miss your own keynote to go to a sporting event. Or get prices wrong by millions of dollars.

Yes, I took Econ 1A in college (though I may remember more about the cute boy in the row in front of me than supply and demand). I clearly remember grasping the intricate graphs and complex formulas in the thick colorful book by Samuelson.

But that preparation did not seem to help this week in trying to understand the new Oracle “Economics” at OpenWorld. A quick search did not lead to any scholarly articles on “near linear pricing.” If there is any sort of “re-engineering” of economics going on, it has not been picked up by the MBA programs just yet.

So when you see any pricing comparisons from Oracle these days, here is what you need to know:

  • Sometimes the systems compared have different numbers of processor cores. Sometimes the systems are the same “size” but size does not equal the performance of what can be run on the system.
  • Sometimes the systems compared have different amounts of memory. Sometimes the systems have the same amount of memory but amount of memory does not equal the performance of what can be run on the system.
  • Sometimes Oracle includes no software on their system and includes software on the other vendor’s system.
  • Sometimes Oracle does not include the expensive Oracle database license costs, which by the way are calculated by core.
  • Sometimes the systems compared have very very different types of support and maintenance.
  • Sometimes the systems compared have very different types and amounts of storage included. Or no storage at all. As we know, storage can be a large part of a system’s configuration and price.

There has been absolutely NO substantiation that these comparisons pit equivalently priced configurations against systems of equivalent throughput.

What is ultimately important is what non-functional requirements the system gives you at a certain price. Compare, and do the TCO. And tell Oracle: I don’t buy sockets, I buy performance.
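That closing point can be made concrete: divide price by delivered throughput before comparing anything. A minimal sketch, with hypothetical placeholder numbers rather than vendor figures:

```python
# Sketch: compare systems on dollars per unit of benchmark throughput,
# not on socket or core counts. All figures below are hypothetical.
def price_per_performance(system_price: float, throughput: float) -> float:
    """Dollars per unit of delivered benchmark throughput."""
    return system_price / throughput

a = price_per_performance(500_000, 1_000_000)  # "smaller" system: $0.50 per unit
b = price_per_performance(800_000, 2_000_000)  # "bigger" system:  $0.40 per unit

# The pricier, bigger system is actually cheaper per unit of work delivered.
print(a, b)  # 0.5 0.4
```

Only after normalizing this way — and folding in software licenses, storage, and support, per the list above — does a price comparison mean anything.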

************************************************



Written by benchmarkingblog

September 25, 2013 at 10:36 am

Posted in Oracle


Born to Run Benchmarks


With apologies to Bruce, you can’t start a fire with a SPARC. A fire of proof points, that is.

In two different instances Oracle’s recent announcements on SPARC benchmark data have been lacking — and certainly couldn’t start any flame of passion at OpenWorld.

The first involved the announcement of the SPARC M6-32 server and engineered system. The press release only had a footnote for “estimated” performance of some mysterious sort. Oracle’s benchmark website actually discussed some benchmarks for this new system — but 1) there was no competitive information and 2) they were on Oracle’s very own benchmarks.

In the second case, the SPARC T5-8 was highlighted on the Java end-to-end SPECjEnterprise2010 benchmark. A record was claimed — in actuality, the IBM Power 780 had 19% greater overall performance and 49% greater application server performance per core than the Oracle system.(1)
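The 19% figure (overall throughput per core) follows directly from the scores and core counts in the footnote; the 49% application-server figure depends on the tier breakdown, which the footnote does not list, so it is not reproduced here:

```python
# Per-core comparison of the two footnoted SPECjEnterprise2010 results.
oracle_ejops = 36_571.36   # SPARC T5-8, 128 cores
oracle_cores = 128
ibm_ejops = 10_902.30      # IBM Power 780, 32 cores
ibm_cores = 32

ibm_per_core = ibm_ejops / ibm_cores           # ~340.7 EjOPS per core
oracle_per_core = oracle_ejops / oracle_cores  # ~285.7 EjOPS per core

advantage = ibm_per_core / oracle_per_core - 1
print(f"IBM per-core advantage: {advantage:.0%}")  # 19%
```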

Additionally, keep in mind that whenever costs are presented in Oracle’s comparisons, they need to be scrutinized to the highest degree. What storage is included, what software is included, what support and maintenance is included? Is an apple being compared to a pineapple?

(P.S. After I wrote this I discovered that today is actually Bruce Springsteen’s birthday. How weird is that?)

************************************************

(1) SPECjEnterprise2010 result of 36,571.36 EjOPS on 1 x SPARC T5-8 (8 chips, 128 cores, 3.6 GHz SPARC T5); Oracle WebLogic 12c (12.1.2); Oracle Database 12c (12.1.0.1) vs. IBM result of 10,902.30 EjOPS on 1 x IBM Power 780 (8 chips, 32 cores, 4.42 GHz POWER7+); WebSphere Application Server V8.5; IBM DB2 Universal Database 10.1. Source: http://www.spec.org. Results as of 9/23/13.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

SPEC, SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPECjEnterprise, SPECjvm, SPECvirt, SPECompM, SPECompL, SPECsfs, SPECpower, SPEC MPI and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation (SPEC).



Written by benchmarkingblog

September 23, 2013 at 9:22 pm
