UCS 2.0: Cisco Stacks the Deck in Las Vegas

July 15th, 2011

This week at Cisco Live 2011 in Las Vegas, Cisco announced new additions to the Cisco UCS fabric architecture. In addition to the existing UCS fabric hardware, UCS customers now have a choice of a new Fabric Interconnect, a new chassis I/O module, and a new Virtual Interface Card. The 6248UP Fabric Interconnect delivers double the throughput, almost half the latency, and more than quadruple the virtual interfaces per downlink, while the new 2208XP chassis I/O module delivers double the chassis uplink bandwidth and quadruple the server downlinks. Last but not least, the 1280 Virtual Interface Card (VIC) server adapter provides quadruple the fabric bandwidth for UCS blade servers by delivering two active 40 Gbps paths per server.

Did I mention these new announcements were additions to the UCS product portfolio, not replacements? I’m not sure I did, so I’ll repeat it… UCS customers now have three Fabric Interconnects, two chassis I/O modules, two Virtual Interface Cards, and multiple traditional network adapters to choose from – and they’re all interoperable.

In addition to the new fabric devices, the soon-to-be-released UCS 2.0 firmware adds several features for existing and future UCS customers: support for disjoint Layer 2 networks, UCS Service Profile support for iSCSI boot, and support for VM-FEX on Red Hat KVM.

 


Additions to the UCS Fabric Portfolio

The UCS 6248UP Fabric Interconnect

The UCS 6248UP Fabric Interconnect, similar to the Nexus 5548 platform, provides up to 48 Unified Ports in a single rack unit (1 RU). Unified Ports are ports that accept either Ethernet or Fibre Channel transceiver (SFP+/SFP) modules. As such, the 6248UP can provide practically any distribution of Ethernet and Fibre Channel uplinks needed to meet a customer’s design and bandwidth needs.


Don’t let the tiny package fool you… While the 6248UP is the same size as the UCS 6120 Fabric Interconnect (1 rack unit), the 6248UP delivers double the throughput, almost half the latency, mor...

Read more: UCS 2.0: Cisco Stacks the Deck in Las Vegas

Buyer beware: is your storage vendor sizing properly for performance, or are they under-sizing technologies like Megacaching and Autotiering?

With the advent of performance-altering technologies (notice the word choice), storage sizing is just not what it used to be.

I’m writing this post because, more and more, I see some vendors not using scientific methods to size their solutions, instead aiming to reach a price point and hoping the technology will achieve the requisite performance (and if it doesn’t, it’s sold anyway; either they can give away some free gear to make the problem go away, or the customer can always buy more, right?).

Back in the “good old days” with legacy arrays, one could (and still can) get fairly deterministic performance: knowing the workload required and, given a RAID type, one knew roughly how many disks would be needed to maintain the required performance in a sustained fashion, as long as the controller and buses were not overloaded.
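
To make that "deterministic" legacy sizing concrete, here is a back-of-the-envelope sketch in Python of the classic spindle-count math. The per-disk IOPS figures and the example workload are rough rules of thumb of my own choosing, not vendor specifications.

```python
# Back-of-the-envelope spindle sizing for a legacy array. The per-disk IOPS
# figures and the example workload are illustrative assumptions.
import math

RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}
DISK_IOPS = {"15K_FC": 180, "10K_SAS": 140, "7200_SATA": 75}

def spindles_needed(total_iops, read_pct, raid_type, disk_type):
    """Disks required to sustain a workload, ignoring controller/bus limits."""
    reads = total_iops * read_pct
    writes = total_iops * (1 - read_pct)
    backend_iops = reads + writes * RAID_WRITE_PENALTY[raid_type]
    return math.ceil(backend_iops / DISK_IOPS[disk_type])

# Example: 5,000 host IOPS, 70% reads, RAID 5 on 15K drives.
print(spindles_needed(5000, 0.70, "RAID5", "15K_FC"))  # -> 53
```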

With modern systems, there is now a plethora of options that can be used to get more performance out of the array, or, alternatively, get the same average performance as before, using less hardware (hopefully for less money).

If anything, advanced technologies have made array sizing more complex than before.

For instance, Megacaches can absorb a large portion of the read I/O before it ever reaches the back-end disks of the array. NetApp FAS systems can have up to 16TB of deduplication-aware, ultra-granular (4K) and intelligent read cache. Truly a gigantic size, bigger than the vast majority of storage users will ever need (and bigger than many customers’ entire storage systems). One could argue that with such an enormous amount of cache, one could dispense with most disk drives and instead save money by using SATA (indeed, several customers are doing exactly that). Other vendors are following NetApp’s lead and starting to implement similar technologies — simply because it makes a lot of sense.

However…

It is crucial, when relying on caching, to take extra care to size the solution properly if a reduction in the number and speed of the back-end disks is desired.
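
To see why that extra care matters, here is a hedged sketch of the arithmetic: the back-end spindle count is extremely sensitive to the assumed cache hit rate. All figures are hypothetical, and the RAID write penalty is ignored for simplicity.

```python
# Why the cache hit-rate assumption dominates the sizing. Figures are
# illustrative; the RAID write penalty is ignored for simplicity.
import math

def backend_disks(total_iops, read_pct, hit_rate, disk_iops=75):
    """Spindles needed behind a read cache (75 IOPS ~ one 7200 RPM SATA disk)."""
    reads = total_iops * read_pct
    writes = total_iops * (1 - read_pct)
    # Only read misses reach the spindles; writes still land on disk eventually.
    backend = reads * (1 - hit_rate) + writes
    return math.ceil(backend / disk_iops)

for hit in (0.90, 0.70, 0.50):
    print(f"{hit:.0%} cache hits -> {backend_disks(10000, 0.8, hit)} SATA disks")
# 90% -> 38 disks, 70% -> 59, 50% -> 80: size for the hit rate you can
# demonstrate, not the one you hope for.
```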

You...

Read more: Buyer beware: is your storage vendor sizing properly for performance, or are they under-sizing...

NetApp Deduplication: An In-depth Look

Authors: dan

There has been a lot of discussion lately about the NetApp deduplication technology, especially on Twitter. There was a lot of misinformation and FUD flying around, so I thought that a blog entry that takes a close look at the technology was in order. But first, a bit of disclosure: I currently work for a storage reseller that sells NetApp as well as other storage. The information in this blog posting is derived from NetApp documents, as well as my own personal experience with the technology at our customer sites. This posting is not intended to promote the technology as much as it is to explain it; the intent here is to provide information from an independent perspective. Those reading this blog post are, of course, free to interpret it the way they choose.

How NetApp writes data to disk.

First, let’s talk about how the technology works. For those who aren't familiar with how a NetApp array stores data on disk, here's the key to understanding how NetApp approaches writes. NetApp stores data on disk using a file system called WAFL (Write Anywhere File Layout). The file system stores metadata which contains information about the data blocks: inodes point to indirect blocks, and indirect blocks point to the data blocks. One other thing that should be noted about the way NetApp writes data is that the controller will coalesce writes into full stripes whenever possible. Furthermore, the concept of updating a block in place is unknown in the NetApp world. Block updates are simply handled as new writes, and the pointers are moved to point to the new "updated" block.
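
To make the pointer-move behavior concrete, here is a toy model of a file system that never updates a block in place. This is my own illustration of the concept, not WAFL's actual implementation.

```python
# A toy model of the write behavior described above: an "update" never
# overwrites a block; it writes a new block and moves the inode's pointer.
# Illustration only -- not WAFL's actual implementation.

class ToyWAFL:
    def __init__(self):
        self.blocks = {}          # block number -> data
        self.inode = {}           # logical offset -> block number
        self.next_block = 0

    def write(self, offset, data):
        new_bn = self.next_block     # always allocate a fresh block
        self.next_block += 1
        self.blocks[new_bn] = data
        old_bn = self.inode.get(offset)
        self.inode[offset] = new_bn  # the pointer move IS the "update"
        # old_bn is now unreferenced (or kept alive by a snapshot)
        return old_bn

fs = ToyWAFL()
fs.write(0, b"v1")
freed = fs.write(0, b"v2")       # same offset: new block, pointer moved
print(fs.blocks[fs.inode[0]])    # b'v2'
print(freed)                     # 0 -- the old block was never overwritten
```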

How deduplication works.

First, it should be noted that NetApp deduplication operates at the volume level. In other words, all of the data within a single NetApp volume is a candidate for deduplication. This includes both file data and block (LUN) data stored within that NetApp volume. NetApp deduplication is a post-process that occurs based on either a w...
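
As a rough illustration of what such a post-process pass does conceptually, here is a simplified sketch that fingerprints blocks and collapses byte-identical duplicates. The structure and names are mine, not NetApp's actual implementation.

```python
# A simplified post-process deduplication pass over a volume's blocks:
# fingerprint each block, then collapse byte-identical duplicates into a
# single shared block. Illustrative only.
import hashlib

def dedupe_pass(blocks):
    """blocks: list of bytes objects (one per 4K block).
    Returns (unique_blocks, pointers) where pointers[i] is the index
    into unique_blocks for original block i."""
    catalog = {}        # fingerprint -> index in unique_blocks
    unique, pointers = [], []
    for data in blocks:
        fp = hashlib.sha256(data).digest()
        if fp in catalog and unique[catalog[fp]] == data:  # verify bytes, don't trust the hash alone
            pointers.append(catalog[fp])
        else:
            catalog[fp] = len(unique)
            unique.append(data)
            pointers.append(catalog[fp])
    return unique, pointers

blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
unique, ptrs = dedupe_pass(blocks)
print(len(unique), ptrs)   # 2 [0, 1, 0] -- the duplicate now shares block 0
```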

Read more: NetApp Deduplication: An In-depth Look

EMC FAST and NetApp FlashCache: A Comparison

Authors: dan

Introduction

This article is intended to provide the reader with an introduction to two technologies,  EMC FAST and NetApp FlashCache. Both of these technologies are intended to improve the performance of storage arrays, while also helping to bend the cost curve of storage downward. With the amount of data that needs to be stored increasing on a daily basis, anything that addresses the cost of storage is a welcome addition to the data center portfolio.

EMC FAST

EMC FAST (Fully Automated Storage Tiering) is actually a suite made up of two different products. The first, called FAST Cache, operates by keeping a copy of "hot" blocks of data on SSD drives. In effect, it acts as a very fast disk cache for data that is currently being accessed, while the data itself is stored on either 15K SAS or 7200 RPM NL-SAS (SATA) drives.

FAST Cache provides the ability to improve the performance of SATA drives, as well as to turbocharge the performance of Fibre Channel and SAS drives. In general, this kind of technology helps decouple performance from spindle count, which drives down the number of drives required for many workloads, and thus the cost and overall TCO of storage.


The other product in the FAST suite is FAST Virtual Pools. This is the product that most people associate with FAST, since it is the one that leverages three different disk technologies: SSD, high-speed drives such as 15K RPM SAS, and slower high-capacity drives such as 7200 RPM NL-SAS. By placing only data that requires high-speed access on the SSD drives, data that is receiving a moderate amount of access on the 15K SAS drives, and the rest on the slower, high-capacity disks, EMC FAST is able to drive the TCO of storage downward.
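
As a conceptual illustration of that placement policy, here is a toy version that ranks extents by access temperature and fills the fastest tier first. This is my own sketch; the real FAST VP policy engine is considerably richer, and the capacities and access counts here are hypothetical.

```python
# A toy tiering policy: rank extents by recent access count and fill SSD
# first, then 15K SAS, then NL-SAS. Not EMC's actual algorithm.

def place_extents(extents, tiers):
    """extents: list of (extent_id, access_count);
    tiers: list of (name, capacity_in_extents), fastest first."""
    placement = {}
    ranked = sorted(extents, key=lambda e: e[1], reverse=True)  # hottest first
    tier_iter = iter(tiers)
    name, room = next(tier_iter)
    for extent_id, _count in ranked:
        while room == 0:
            name, room = next(tier_iter)   # spill to the next, slower tier
        placement[extent_id] = name
        room -= 1
    return placement

extents = [("e1", 900), ("e2", 5), ("e3", 450), ("e4", 2), ("e5", 300)]
tiers = [("SSD", 1), ("15K_SAS", 2), ("NL_SAS", 10)]
print(place_extents(extents, tiers))
# {'e1': 'SSD', 'e3': '15K_SAS', 'e5': '15K_SAS', 'e2': 'NL_SAS', 'e4': 'NL_SAS'}
```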


NetApp FlashCache

NetApp approaches the overall issue of improved performance while simultaneously driving down the TCO of storage in a different way. NetApp believes that using fewer disks to store the same amount of data is...

Read more: EMC FAST and NetApp FlashCache: A Comparison

The Tale of Two Cisco UCS Predictions

May 25th, 2011

If Charles Dickens were to write a book about it, it would begin like this: “It was the best of predictions, it was the worst of predictions, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief from a newcomer, it was the epoch of incredulity from an incumbent…”

Of course, I’m referring to the UCS market share predictions from Cisco’s CEO, John Chambers, compared to the UCS statements made by HP’s senior management. Let’s summarize each side’s statements before we dive into the hard facts.

Prediction #1: John Chambers, Cisco’s CEO, on September 14th, 2010 said: (1)

“UCS has already taken the #3 market share spot in US/Canada for x86 blade servers.”

-and-

“Cisco expects UCS to be 50% the market share of the #2 competitor for the worldwide x86 blade server market within the next 2 quarters.”

Prediction #2: Randy Seidl, HP Senior VP, on April 26th, 2010 said: (2)

“A year from now the difference is (Cisco) UCS (Unified Compute System) is dead… “

Extra Credit: Leo Apotheker, HP’s CEO, on March 16th, 2011 said: (3)

Gallant: “Speaking of Cisco in the server market, is it a threat or an annoyance to HP?”
Apotheker: “Neither.”
Gallant: “Can you expand?”
Apotheker: “We hardly ever see [Cisco UCS].”
Gallant: “They claim that sales are growing pretty rapidly of the UCS system, but you’re not seeing them in competitive situations?”
Apotheker: “They must be selling on planet Zircon.”


Yesterday, May 24th, 2011, IDC announced the results of the Q1 2011 Worldwide Quarterly Server Tracker. (4) The results clearly proved the credibility of Mr. Chambers while doing exactly the opposite for HP’s Seidl and Apotheker. Here is a graphical breakdown of the results:


[Chart: x86 blade server market share, worldwide and US]


The chart above (5) shows x86 blade server market share for both worldwide and US sales. These results validate both of John Chambers’ statements: that Cisco has taken the #3 market share spot in the US/Canada for x86 blade servers, and that “Ci...

Read more: The Tale of Two Cisco UCS Predictions

Flash Storage and Automated Storage Tiering

Authors: dan

In recent years, a move toward automated storage tiering has begun in the data center. This move has been inspired by the desire to continue to drive down the cost of storage, as well as the introduction of faster, but more expensive, storage in the form of Flash memory in the storage array marketplace. Flash memory is significantly faster than spinning disk, and thus its ability to provide very high performance storage has been of interest. However, its cost is considerable, and therefore a way to utilize it while still bending the cost curve downward was needed. Note that Flash memory has been implemented in different ways: as a card in the storage array controller, as SSD disk drives, and even as cache on regular spinning disks. However it is implemented, its speed and expense remain the same.

Enter the concept of tiered storage again. The idea was to place only that data which absolutely required the very high performance of Flash on Flash, and to leave the remaining data on spinning disk. The challenge with tiered storage as it has been defined in the past was that too much data would be placed on very expensive Flash, since traditionally an entire application would have all its data placed on a single tier. Even if only specific parts of the data at the file or LUN level were placed on Flash, the quantity needed would still be very high, driving the costs for a particular application up. It was quickly recognized that the only way to make Flash cost effective would be to place only the blocks which are “hot” for an application in Flash storage, thereby minimizing the footprint of Flash storage.
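
A quick back-of-the-envelope calculation shows why block-level placement wins. All the figures here are hypothetical, chosen only to illustrate the shape of the economics.

```python
# The economics argued above, in numbers (all figures hypothetical):
# tiering at the LUN level vs. at the block level for one application.
app_capacity_gb = 2000          # total data for the application
hot_fraction = 0.05             # share of blocks that are actually "hot"
flash_cost_per_gb, disk_cost_per_gb = 10.0, 0.5

lun_level_flash = app_capacity_gb                    # whole LUN pinned to flash
block_level_flash = app_capacity_gb * hot_fraction   # only hot blocks promoted

print(f"LUN-level tiering:   {lun_level_flash * flash_cost_per_gb:,.0f} in flash")
print(f"Block-level tiering: {block_level_flash * flash_cost_per_gb:,.0f} in flash "
      f"+ {(app_capacity_gb - block_level_flash) * disk_cost_per_gb:,.0f} on disk")
# 20,000 vs. 1,000 + 950 -- an order of magnitude less spent on flash.
```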

The issue addressed by automated storage tiering is that you no longer need to know ahead of time what the proper tier of storage for a particular application’s data needs to be. Furthermore, the classification of the data can occur at a much more fine-grained block level rather than the file or the L...

Read more: Flash Storage and Automated Storage Tiering

Cisco UCS B-Series NIC Teaming/Bonding OS Support Matrices

May 17th, 2011

Recently, a couple of Twitter pals of mine, @veverything and @ChrisFendya, discussed VMware KB article 1013094, which lists an unsupported teaming type for the Cisco UCS B200 blade server. Their discussion made me realize that there wasn’t an easy-to-find resource for Cisco UCS customers who needed to know “OS Team Types Supported Per UCS Network Adapter” and “Unsupported Team Types”.

Below you will find both of these resources. I’ve listed a chart per Cisco UCS B-Series Network Adapter that shows the teaming/bonding types that will work per Operating System. Lastly, you’ll find a chart showing the team types that will not work.

Please keep in mind the following:

  1. Just because I say it works, doesn’t mean Cisco, Intel, Broadcom, QLogic, Emulex, Linux, VMware, Microsoft, etc. supports it. I’m listing the teaming/bonding types that will work based on my experience and based on what I know about NIC Teaming and the UCS architecture.
  2. This article isn’t intended to teach you which teaming/bonding type to use and I’m not necessarily making recommendations on the use of specific team types. I’m simply listing team types that should work if you choose to use them. I have another article queued up that discusses NIC Teaming/bonding in detail and helps you decide which one to use within a UCS environment. That’s coming in the near future. In the meantime, if you are really jonesing to learn more about NIC Teaming in general, you are welcome to read through a lengthy whitepaper I wrote in a former life HERE. While it’s specific to HP NIC Teaming, many of the concepts are the same across Intel, Broadcom, etc.
  3. When it comes to NIC Teaming, Hyper-V is a special case. Since network redundancy is left to the NIC Teaming vendor (vs. the VMware model where a vSwitch handles network redundancy), it’s important to use an OEM NIC Team type that is “VM aware”…or a team type that will GratARP the VM’s MAC address after a NIC failover (see the sketch after this list). If a NIC Teaming vendor (e.g. Emule...
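
For the curious, here is roughly what “GratARP the VM’s MAC address” looks like on the wire: an unsolicited, broadcast ARP reply that makes upstream switches relearn which port the MAC lives behind. This sketch uses the scapy packet library and placeholder addresses; it illustrates the mechanism, not any vendor’s failover code.

```python
# A gratuitous ARP as sent after a failover: an unsolicited ARP reply,
# broadcast so switches update their MAC tables. Requires scapy
# (pip install scapy) and root privileges; IP/MAC/interface are placeholders.
from scapy.all import ARP, Ether, sendp

def announce(vm_ip, vm_mac, iface):
    garp = Ether(src=vm_mac, dst="ff:ff:ff:ff:ff:ff") / ARP(
        op=2,            # ARP reply, sent unsolicited
        psrc=vm_ip, hwsrc=vm_mac,
        pdst=vm_ip,      # sender == target: "gratuitous"
        hwdst="ff:ff:ff:ff:ff:ff",
    )
    sendp(garp, iface=iface, verbose=False)

announce("10.0.0.42", "00:50:56:aa:bb:cc", "eth0")
```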

Read more: Cisco UCS B-Series NIC Teaming/Bonding OS Support Matrices

NetApp vs EMC usability report: malice, stupidity or both?

Most are familiar with Hanlon’s Razor:

Never attribute to malice that which is adequately explained by stupidity.

A variation of that is:

Never attribute to malice that which is adequately explained by stupidity, but don’t rule out malice.

You see, EMC sponsored a study comparing their systems to ones from the company they look up to and try to emulate. The report deals with ease of use (and I’ll be the first to admit the current iteration of EMC boxes is far easier to use than in the past, and the GUI has some cool stuff in it). I was intrigued, but after reading the official-looking report posted by Chuck Hollis, I wondered who in their right mind would lend it credence, and ignored it, since I have a real day job solving actual customer problems and can’t possibly respond to every piece of FUD I see (and I see a lot).

Today I’m sitting in a rather boring meeting so I thought I’d spend a few minutes to show how misguided the document is.

In essence, the document tackles the age-old dilemma of which race car to get by comparing how easy it is to change the oil, and completely ignores the “winning the race with said car” part. My question would be: “which car allows you to win the race more easily and with the least headaches, least cost and least effort?”

And if you think winning a “race” is just about performance, think again.

It is also interesting how the important aspects of efficiency, reliability and performance are not tackled, but I guess this is a “usability” report…

Strange that a company named “Strategic Focus” reduces itself to comparing arrays by measuring the number of mouse clicks. Not sure how this is strategic for customers. They were commissioned by EMC, so maybe EMC considers this strategic.

I’ll show how wrong the document is by pointing at just some of the more glaring issues, but I’ll start by saying a large multinational company has many PB of NetApp boxes around the globe and 3 relaxed guys to manage it all. How’s that for a real example?

  1. Page 2,...

Read more: NetApp vs EMC usability report: malice, stupidity or both?

The Cisco UCS Advantage Series

The Cisco UCS Advantage Series | M. Sean McGee

The Cisco UCS Advantage Series...

Read more: The Cisco UCS Advantage Series

Cisco’s Stocking Stuffer for UCS Customers: Firmware Release 1.4(1)

December 20th, 2010

Santa came early this year for Cisco UCS customers. Today, Cisco released UCS firmware version 1.4(1).  This release is the single most impressive feature enhancement release I’ve seen in all my 11 years of working on blade servers.  Allow me to walk you through this list of new features and provide a deeper dive into some of the details behind each one.

Note: The Release Notes are posted here: http://www.cisco.com/en/US/partner/docs/unified_computing/ucs/release/notes/OL_24086.html

Server Platform and Management Enhancements:

  • UCS C-Series Rack server integration into UCS Manager – Unified Management for the entire UCS portfolio

    Yes, you read me right – Cisco is the 1st server vendor to integrate rack server management into the blade server and blade chassis management interface, so that a single management tool configures and monitors both your blade and rack servers. This initial release includes support for the C200, C210, and C250 Cisco UCS rack servers. Support for additional Cisco UCS rack servers will be added in the near future.

    UCS Manager features extended to C-Series Rack servers include: Service Profiles, Service Profile migration between compatible B-Series and C-Series servers, automated server discovery, fault monitoring, firmware updates, etc.

  • Chassis and multi-Chassis power capping for UCS B-Series Blade Servers

    Cisco has enhanced the facility manager’s control over UCS blade server power consumption by adding Group Level Power Capping, Dynamic Intra-chassis power redistribution, and Service Profile Priorities. Within the data center, power should be distributed to a blade chassis or groups of blade chassis, not to individual blade servers. If a server is “statelessly” moved using a Service Profile from one chassis to another, a statically defined power cap per server is mostly useless. What if you moved a bunch of servers with static power caps (in watts) to the same power distribution unit (PDU) – the s...
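
As a toy illustration of the group-level idea, a shared chassis-group budget might be distributed by Service Profile priority rather than static per-server watts. The weighting scheme and all numbers here are entirely hypothetical; this is not Cisco's actual allocation algorithm.

```python
# A toy illustration of group-level power capping: a chassis-group budget is
# split by Service Profile priority instead of static per-server caps.
# Hypothetical weighting -- not Cisco's actual algorithm.

def allocate(budget_w, servers):
    """servers: list of (name, priority) with priority 1 = most important.
    Weight each server by 1/priority and scale to the group budget."""
    weights = {name: 1.0 / prio for name, prio in servers}
    total = sum(weights.values())
    return {name: round(budget_w * w / total) for name, w in weights.items()}

group = [("blade1", 1), ("blade2", 1), ("blade3", 3), ("blade4", 5)]
print(allocate(2500, group))
# {'blade1': 987, 'blade2': 987, 'blade3': 329, 'blade4': 197}
# Move a high-priority Service Profile to another group and its share of the
# budget moves with it -- which a static per-server cap cannot do.
```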

Read more: Cisco’s Stocking Stuffer for UCS Customers: Firmware Release 1.4(1)

Dell Buys 3PAR and Monolithic vs. Modular Storage

Authors: dan

Well, it’s been a while since I blogged, but something happened today that warrants comment. Dell has offered to buy 3PAR for about $1.1 billion. A number of my customers have called and emailed me asking what this all means. How do I view the addition of 3PAR to Dell’s storage portfolio? What does this mean for the storage industry, and should they seriously start (or stop) looking at 3PAR? What about all this discussion about monolithic vs. modular storage? Is 3PAR really Tier-1 storage?

From a Sales Perspective

So, what does the fact that Dell has paid a lot of money to get 3PAR mean to those who are buying storage out there? Certainly 3PAR has been one of the innovators in storage ever since it appeared back in 1999, bringing things like thin provisioning and tiered storage to market. The question is, will Dell leave 3PAR alone as a business unit to continue to operate pretty much as they have in the past?

Obviously, the fact that 3PAR was on the block for sale says that they weren’t exactly burning it up, so I would expect Dell to make some changes. For example, 3PAR wasn’t the most channel-friendly storage company in the world; they preferred to sell direct, especially to larger customers. I expect that this might change once Dell management starts to make more of the decisions at 3PAR. Dell depends a lot on the channel, and certainly they expect integrated sales. In other words, Dell expects that sales to their bigger clients be integrated between servers, storage, and desktops where possible. HP and IBM tend to do the same thing. Once you let in the IBM server guy, for example, expect IBM storage to be right behind, and an “integrated offering of servers and storage” will get pushed at the highest (CIO) levels of your organization.

My view of this is that it’s never a good thing, since HP, IBM, and now Dell have strengths and weaknesses in their different lines, and just because I happen to think that, say, HP servers are the b...

Read more: Dell Buys 3PAR and Monolithic vs. Modular Storage