Datacenters To Get A High Fiber Bandwidth Diet

Read more: Datacenters To Get A High Fiber Bandwidth Diet

Storage Performance: Why I/O Profiling Matters

With a significant share of annual IT budgets being spent on storage (25% by some estimates), the focus on making storage deliver what the business needs has never been greater.  However, despite advances in flash technology, larger disk drives, and the move to commodity and software-defined storage, the problems customers face remain largely the same as they were 20 years ago.

In a recent report, Tintri Inc, a vendor of hybrid and all-flash storage appliances, highlighted the major issues confronting storage administrators today.  The top four continue to be (in order): performance, capital expenses, scale and manageability.  Infinidat Inc, another hybrid array vendor, found cost, reliability, operational complexity and performance to be the main headaches for the storage administrators in its survey.

From these studies one theme clearly stands out: delivering high-performing storage at a reasonable cost (both capital and operational) is an ongoing issue for pretty much every IT organisation.

Unfortunately, these problems (particularly performance) continue to arise because storage is complex.  The consolidation of storage onto external appliances that started around 20 years ago has taken us from managing a few hundred terabytes of data to systems that can store multiple petabytes of information.  In parallel, I/O density (IOPS per TB of storage) has increased in step with server processor power.  The move to server and desktop virtualisation means workloads are becoming more random and more unpredictable in nature.
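To put the I/O density figure in concrete terms, consider a hypothetical worked example (the numbers are illustrative, not from either report): an array serving 20,000 IOPS from 10 TB of capacity has an I/O density of 20,000 / 10 = 2,000 IOPS per TB.  Consolidate ten such workloads onto a single 100 TB system and the density requirement stays at 2,000 IOPS per TB, but that one appliance must now sustain 200,000 IOPS in aggregate, and virtualisation mixes those streams into an increasingly random blend.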

The availability and variety of storage products has also never been greater.  The traditional vendors are being attacked on all sides by new start-ups, open-source and software-defined solutions (including hyper-convergence).  Each of these platforms will have strengths and weaknesses when it comes to delivering storage I/O because, by their nature, they are architected in different ways using multiple m...

Read more: Storage Performance: Why I/O Profiling Matters

VMware Host Client (HTML5 based Web-Client) on Android

The VMware Host Client is an HTML5 client that is used to connect to and manage single ESXi hosts without a vCenter Server. The Host Client was initially created as a Fling, but made it to a supported component of vSphere 6.0 Update 2.

No Flash, no Windows-based C# Client – shouldn’t it work on Android-based smartphones and tablets? I’ve tried managing a standalone homelab ESXi host with the web-based Host Client, and it works quite nicely (with some tweaking).

Of course, it’s not suitable for large platforms, as the vCenter Web Client is still Flash-based.

This is the Virtual Machine view on my 8″ Android tablet, without any modification. It looks okay and works, but I’m not really happy with the layout: the tabs and the address bar are annoying, and the whole site is non-zoomable and extends beyond the browser viewport.

Most pages work, and for some the vertical view looks better. Performance charts and configuration changes are not a problem.

However, I want to optimize it for my tablet/smartphone by removing the address bar and making it zoomable. To achieve this, I’ve investigated the source code.

In the head, I found the following meta tags:

 <!-- The initial, max and min scale, values are needed due to what appears
 to be a bug in iOS 9. We should be able to remove those once that bug
 is addressed by Apple. See https://forums.developer.apple.com/thread/13510 -->
 <meta name="viewport" content="
 width=device-width,
 initial-scale=1.0001,
 minimum-scale=1.0001,
 maximum-scale=1.0001,
 user-scalable=yes" />

 <!-- The following will hide the chrome on mobile Safari if the user has
 added a shortcut to their home screen, this is currently
 broken on iOS 9, awaiting a fix from Apple.
 See https://forums.developer.apple.com/thread/9819 -->
 <meta name="apple-mobile-web-app-capable" content="yes" />
 <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent" />
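The viewport tag is the one to relax. A minimal sketch of the change (illustrative and my own edit, not anything VMware ships; note that dropping the pinned scale values also discards the iOS 9 workaround mentioned in the comment):

 <!-- Relaxed viewport: removing the fixed minimum/maximum scale
 re-enables pinch-to-zoom in Android browsers -->
 <meta name="viewport" content="
 width=device-width,
 initial-scale=1.0,
 user-scalable=yes" />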

Well…looks like Apple products are common at VMware. I’m also a lit...

Read more: VMware Host Client (HTML5 based Web-Client) on Android

Upgrading Cisco UCS Fabric Interconnects

I have to do this first, as this is a high-risk change for any environment:

DISCLAIMER: I ACCEPT NO RESPONSIBILITY FOR ANY DAMAGE OR CORRUPTION OF DATA THAT MAY OCCUR AS A RESULT OF CARRYING OUT THE STEPS DESCRIBED BELOW. YOU DO THIS AT YOUR OWN RISK.

And now to the point. Cisco has had two generations of Fabric Interconnects, with a third generation released just recently. The first is the 6100 series, which includes the 6120XP and 6140XP. The second generation is the 6200 series, which introduced unified ports and has two models in its range: the 6248UP and 6296UP. And there is now a third generation of 40Gb fabric interconnects with the 6324, 6332 and 6332-16UP models.

We are yet to see mass adoption of 40Gb FIs, and some customers are still upgrading from the first generation to the second.

In this blog post we will go through the process of upgrading 6100 fabric interconnects to 6200 by using 6120 and 6248 as an example.

Prerequisites

Cisco UCS has a pair of fabric interconnects which work in an active/passive mode from the control plane perspective. This lets us do an in-place upgrade of an FI cluster by upgrading the interconnects one at a time, without any further reconfiguration needed in UCS Manager in most cases.

For a successful upgrade, the old and new interconnects MUST run the same firmware revision. That means you will need to upgrade the first new FI to the same firmware before you can join it to the cluster to replace the first old FI.

This can be done by booting the FI in standalone mode, giving it an IP address and installing the firmware via UCS Manager.

The second FI won’t need a manual firmware update, because when a FI of the same hardware model is joined to a cluster it’s upgraded automatically from the other FI.
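Before and after swapping each interconnect, it’s worth verifying that the cluster is healthy from the UCS CLI. A minimal sketch (hostnames and output are abridged and illustrative; exact wording varies by firmware release):

 UCS-A# connect local-mgmt
 UCS-A(local-mgmt)# show cluster extended-state
 Cluster Id: 0x...
 A: UP, PRIMARY
 B: UP, SUBORDINATE
 HA READY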

Preparation tasks

It’s a good idea to make a record of all connections from the current fabric interconnects and make a configuration backup before an upgrade.

If you have any unused connections which you’re not planning to move, it’s a good time to disconnect the cables and dis...

Read more: Upgrading Cisco UCS Fabric Interconnects

On the Topic of Lock-In

While talking with customers over the past couple of weeks during a multi-country/multi-continent trip, one phrase that kept coming up is “lock-in”, as in “we’re trying to avoid lock-in” or “this approach doesn’t have any lock-in”. While I’m not a huge fan of memes, this phrase always brings to mind The Princess Bride, Vizzini’s use of the word “inconceivable,” and Inigo Montoya’s famous response. C’mon, you know you want to say it: “You keep using that word. I do not think it means what you think it means.” I feel the same way about lock-in, and here’s why.

Lock-in, as I understand how it’s viewed, is an inability to migrate from your current solution to some other solution. For example, you might feel “locked in” to Microsoft (via Office or Windows), “locked in” to Oracle (via their database platform or applications), or even “locked in” to VMware through vCenter and vSphere. Here you are, running a product or solution or platform that is the right fit for your needs, but because you may not be able to migrate to some other platform at some point in the future, you’re “locked in.” Therefore, in order to keep your options open, you feel you have to choose a different solution, perhaps even settling for one that is less ideally suited to the problem(s) you’re trying to solve.

Based on this understanding, the key question that comes to my mind is this: what makes you think you can avoid lock-in?

The reality is that every single platform/solution/product out there has lock-in. They might have various levels of lock-in, but lock-in exists everywhere:

  • Linux has lock-in. (Gasp!) Don’t believe me? Ever run into a problem running a script because it contained idiosyncrasies specific to a particular distribution? Yes, these situations can be minimized, but the fact remains that there is some level of l...

Read more: On the Topic of Lock-In

What’s inside VMware vSphere 6.0 Update 2

VMware has just released vSphere 6.0 Update 2. A number of related product updates were released along with it today.

If you want to get notified about new products, subscribe to my vTracker RSS Feed.

vSphere 6.0 Update 2 Features

  • High Ethernet link speed: ESXi 6.0 Update 2 supports 25Gb and 50Gb Ethernet link speeds.
  • VMware Host Client: The VMware Host Client is an HTML5 client that is used to connect to and manage single ESXi hosts without a vCenter Server. The Host Client made it from a Fling to a supported product. Very nice!
  • vSphere APIs for I/O Filtering (VAIO): Enhancements to VAIO include support for IPv6 and for VMIOF versions 1.0 and 1.1.
  • Two-factor authentication for the vSphere Web Client: Better security with RSA SecurID and smart card authentication.
  • Windows 10 support for the vSphere Web Client

VMware vCenter Server 6.0 Update 2 Release Notes
VMware ESXi 6.0 Update 2 Release Notes

Supported Hardware for vSphere 6.0 Update 2 (VMware HCL)

Good news from the VMware HCL: support has not been dropped for any server. All servers that are supported for vSphere 6.0 Update 1 are also supported in vSphere 6.0 Update 2.

VMware vRealize Log Insight for vCenter Server

VMware recently announced that all users with a vCenter Server license are entitled to use it to get a 25-OSI pack for vRealize Log Insight, at no charge. The package is now available, and a license key is provided at the Log Insight for vCenter download page. The offer is not limited to vCenter 6.0 Update 2; every existing and new vCenter Server customer is entitled.

Virtual SAN 6.2

VMware vSphere 6.0 Update 2 contains new features for Virtual SAN, including:

  • Deduplication
  • Compression
  • Failure Tolerance methods RAID-5/6
  • Sparse Swap Files for lower disk consumption
  • Quality of Service (IOPS limit for objects)
  • Integrated Performance Metrics (VSAN Observer-like insights in the Web Client)

VMware vCloud Director 8.0.1 released

Together with vSphere 6.0...

Read more: What’s inside VMware vSphere 6.0 Update 2

Learning From Rackspace About Bare Metal Clouds

Read more: Learning From Rackspace About Bare Metal Clouds

Subnet Pools with VMware NSX

Today’s blog post discusses how VMware NSX supports Neutron Subnet Pools. This article was written by Marcos Hernandez, one of the OpenStack specialists in VMware’s Networking & Security Business Unit (NSBU).

Neutron, the OpenStack networking project, continues to evolve to support use cases that are relevant for the enterprise. Early on, OpenStack networking focused on delivering overlapping IP support for tenant subnets. Over time, more complex topologies have been added to Neutron. In some cases, network administrators may want to be in charge of the IP scheme used by the consumers of an OpenStack private cloud. These and other options are discussed in a recent SuperUser article published by Wells Fargo, as well as in the Neutron-NSX integration documentation.

In Kilo, a new feature called Neutron subnet pools was added to the OpenStack networking workflows (see the feature documentation). Subnet pools allow an administrator to create a large Classless Inter-Domain Routing (CIDR) IP address range for a Neutron network, from which tenants can then create subnets without specifying a CIDR themselves. In cases where valid, routable IPs are used, subnet pools are very useful: tenants only need to specify minimal configuration parameters to create a subnet, without worrying about the IP subnet on which the VMs/instances will sit. Although subnet pools are not supported in Horizon (the OpenStack dashboard), they can be created via the CLI or API.

Here is an example of how to use subnet pools:

1. Let’s first create a Neutron network called TestNet. Please note that this network can be Shared, but it cannot be External: Neutron subnet pools apply only to tenant networks, not to external networks (where Floating IPs reside):

~$ neutron net-create TestNet
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value...
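The excerpt is cut off above, but to round out the workflow it describes, here is a hedged sketch of the remaining steps (the pool name, prefix and prefix length are illustrative choices of mine, not values from the original article):

~$ # Create a subnet pool with a large CIDR to carve tenant subnets from
~$ neutron subnetpool-create --pool-prefix 192.168.0.0/16 --default-prefixlen 24 TestPool
~$ # Create a subnet on TestNet from the pool, without specifying a CIDR
~$ neutron subnet-create --subnetpool TestPool TestNet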

Read more: Subnet Pools with VMware NSX

Ten Years of AWS and a Status Check for HPC Clouds

Read more: Ten Years of AWS and a Status Check for HPC Clouds

Pure Storage Brings Petabyte Scale To All Flash

Read more: Pure Storage Brings Petabyte Scale To All Flash

VMware Product Lifecycle Calendar

General Support for vSphere 5.0 and 5.1 runs out this year. It is very important to keep an eye on support dates, because products that have reached end of support (“EOS”) no longer receive security updates or bugfixes from VMware. To help keep an eye on VMware’s Lifecycle Product Matrix, I’ve created a page that shows a countdown for each product.

VMware Product End Of Support Countdown

Features

Current Date
Products that are running out of support soonest appear at the top of the table. For a clear view, the current date is marked in the table. Products whose EOS was reached in the last 90 days remain visible on this page.

Product Mouseover
Displays the General Availability, End of General Support, End of Technical Guidance and End of Availability dates for each product.

Reminder in .ics Format
Each product has an associated .ics file that adds a reminder to any compatible calendar. To download or open the iCalendar file, just click on the EOS countdown.
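For reference, a minimal sketch of what such an iCalendar reminder could contain (the product name, dates and identifiers are illustrative placeholders, not taken from the actual files):

 BEGIN:VCALENDAR
 VERSION:2.0
 PRODID:-//example//EOS Countdown//EN
 BEGIN:VEVENT
 UID:vsphere-eos-example@example.com
 DTSTART;VALUE=DATE:20160824
 SUMMARY:End of General Support: VMware vSphere 5.1 (illustrative)
 BEGIN:VALARM
 TRIGGER:-P30D
 ACTION:DISPLAY
 DESCRIPTION:vSphere 5.1 reaches End of General Support in 30 days
 END:VALARM
 END:VEVENT
 END:VCALENDAR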

Read more: VMware Product Lifecycle Calendar