Migrate a VCF Hybrid vSAN Cluster to an All-Flash vSAN Cluster

Two weeks ago I published an article describing how to Migrate a Hybrid vSAN Cluster to an All-Flash vSAN Cluster. I received many positive comments on the article, thanks for that! I also received a few good questions, and one of them stood out because it was asked multiple times: “What does this process look like if we want to migrate a VCF Hybrid vSAN Cluster to an All-Flash vSAN Cluster?”

Well, that’s a good question, and just like with the previous article I had no experience with this myself. The VMware Cloud Foundation documentation (link) only describes the commissioning (link) and decommissioning of hosts (link); it does not describe the procedure for replacing the Hybrid vSAN hosts in a VCF-managed, vSAN-enabled cluster with All-Flash vSAN hosts. But even if the procedure had been described in the documentation, I would still want to gain experience with it myself. Not just from a theoretical standpoint (it should work), but from a practical standpoint: I want to understand the impact on the running workloads when you perform this action. Especially because we might need to perform this procedure on a VCF Hybrid vSAN Cluster in the Management Workload Domain of a VMware Cloud Foundation deployment. So this article isn’t only about validating the procedure, but also about verifying that there is no impact on the workloads running on the VCF Hybrid vSAN Cluster whose hosts we are life-cycling!

So here we go again, TO THE LAB!

Lab Setup

  • Software used
    • VCF 4.1
    • vCenter 7.0 u1
    • vSphere 7.0 u1
    • vSAN 7.0 u1
  • Hosts – Hybrid – 1x 48GB Cache SSD, 3x 48GB Capacity HDD
    • esxi-1.vrack.vsphere.local
    • esxi-2.vrack.vsphere.local
    • esxi-3.vrack.vsphere.local
    • esxi-4.vrack.vsphere.local
  • Hosts – All-Flash – 1x 48GB Cache SSD, 3x 48GB Capacity SSD
    • esxi-5.vrack.vsphere.local
    • esxi-6.vrack.vsphere.local
    • esxi-7.vrack.vsphere.local
    • esxi-8.vrack.vsphere.local

Let’s get started

I’ve prepared VCF Management Workload Domain Cluster “SDDC-Cluster1” with the 4 Hybrid hosts and made sure the vSphere and vSAN Cluster status was healthy.


 
Within the vSphere Client, under Configure -> vSAN -> Disk Management, the vSAN Disk Groups are shown. In the “Type” column you can see that the vSAN Disk Groups of the VCF Management vSAN Cluster are of the type Hybrid.
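
If you prefer to check this from a script instead of clicking through the UI, the small pyVmomi sketch below reports the disk group type for every host. This is just an illustration, not part of the official procedure: it assumes pyVmomi is installed, and the vCenter FQDN and credentials are placeholders for your own environment. A disk group counts as All-Flash when all of its capacity devices are flash.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim
    import ssl

    ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
    si = SmartConnect(host="vcenter.vrack.vsphere.local",  # placeholder FQDN
                      user="administrator@vsphere.local", pwd="***",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            storage_info = host.configManager.vsanSystem.config.storageInfo
            for mapping in storage_info.diskMapping:
                # All-Flash when every capacity device reports ssd=True
                dg_type = ("All-Flash" if all(d.ssd for d in mapping.nonSsd)
                           else "Hybrid")
                print(f"{host.name}: disk group on "
                      f"{mapping.ssd.canonicalName} -> {dg_type}")
        view.Destroy()
    finally:
        Disconnect(si)
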
(more…)


Migrate a Hybrid vSAN Cluster to an All-Flash vSAN Cluster

For one of my projects we have to perform a hardware lifecycle of the hosts within a Management Cluster. To make this work, we have to migrate a Hybrid vSAN Cluster to an All-Flash vSAN Cluster. The current hosts in the Management Cluster have magnetic spinning disks, so vSAN is running in a Hybrid configuration. The new hosts are equipped with nice NVMe drives, so vSAN should run in an All-Flash (AF) configuration to make all the nice features available, like Erasure Coding, Deduplication, and Compression.

 
I had never done this before, and the vSAN documentation (link) only describes replacing the disks of the hosts; it does not describe the procedure for replacing the Hybrid vSAN hosts with All-Flash vSAN hosts.

Well, this is a good opportunity to get my hands dirty again with some labbing. To the Lab!

Lab Setup

  • Software used
    • vCenter 7.0 u1
    • vSphere 7.0 u1
    • vSAN 7.0 u1
  • Hosts – Hybrid – 1x 8GB Cache SSD, 1x 64GB Capacity HDD
    • dcesxi71.vmbaggum.local
    • dcesxi72.vmbaggum.local
    • dcesxi73.vmbaggum.local
  • Hosts – All-Flash – 1x 8GB Cache SSD, 1x 64GB Capacity SSD
    • dcesxi74.vmbaggum.local
    • dcesxi75.vmbaggum.local
    • dcesxi76.vmbaggum.local

Let’s get started

I’ve prepared vSphere Cluster “CL-03” with the 3 Hybrid hosts and made sure the vSphere and vSAN Cluster status was healthy.

vSAN is enabled, and as you can see everything is pretty much at its defaults.
(more…)


vSAN Memory Consumption Calculator

The vSAN Memory Consumption Calculator was updated on 19-01-2021 to reflect vSAN 7.x, after the KB article was updated and I received some messages that the calculator was no longer up to date.

Disclaimer!! This calculator is officially not supported by VMware; please use it for indications only. Please use the official vSAN sizer at https://vsansizer.vmware.com/ for vSAN sizing.

I ran into a strange issue today: a host was consuming way too much RAM while only a single VM was running on it, and that VM was configured with only half of the host’s RAM?!? After some basic troubleshooting a colleague (thanks Satish) pointed me to KB article KB2113954, which explained the issue.

vSAN Memory

The host in question was configured with 128GB of RAM and had 4 disk groups with 7 large-capacity disks. If you do the math, this vSAN configuration requires 48GB of RAM to run. In other words, it explained what we saw: vSAN was gobbling up more RAM than anticipated!

To calculate vSAN memory consumption you use this equation:

vSANFootprint = HOST_FOOTPRINT + NumDiskGroups * DiskGroupFootprint

DiskGroupFootprint = DISKGROUP_FIXED_FOOTPRINT + DISKGROUP_SCALABLE_FOOTPRINT + CacheSize * CACHE_DISK_FOOTPRINT + NumCapacityDisks * CAPACITY_DISK_FOOTPRINT
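
If you want to play with the numbers yourself, below is a minimal Python sketch of that equation. Note that the footprint constants are deliberately left as parameters: the actual values differ per vSAN version and between Hybrid and All-Flash, so take them from KB2113954; the numbers in the example call are made-up placeholders.

    def vsan_footprint_mb(num_disk_groups, cache_size_gb, num_capacity_disks,
                          host_footprint, dg_fixed, dg_scalable,
                          cache_disk_footprint, capacity_disk_footprint):
        """Compute vSANFootprint per the KB2113954 equation (all footprints
        in MB, cache_size_gb is the cache device size in GB)."""
        disk_group_footprint = (dg_fixed + dg_scalable
                                + cache_size_gb * cache_disk_footprint
                                + num_capacity_disks * capacity_disk_footprint)
        return host_footprint + num_disk_groups * disk_group_footprint

    # Example call with made-up constants; replace them with the KB values:
    print(vsan_footprint_mb(num_disk_groups=4, cache_size_gb=600,
                            num_capacity_disks=7, host_footprint=5000,
                            dg_fixed=500, dg_scalable=4000,
                            cache_disk_footprint=5, capacity_disk_footprint=160))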

Easy right? Well let me make it even easier for you:
 

All Flash vSAN Memory Consumption Calculator

[interactive calculator]

Hybrid vSAN Memory Consumption Calculator

[interactive calculator]
* vSAN scales back its memory usage when hosts have less than 32GB of memory. Source: Link


Configure vSAN with iSCSI based disks

Every time I needed to test something on vSAN in my vLab, I had to spin up some dusty nested ESXi nodes, because I don’t have the proper disks in my test NUCs. But now, thanks to William Lam, I found a way to let vSAN claim iSCSI disks and have them contribute to the vSAN Disk Group! This way I can leave the nested ESXi nodes powered off, very easily test disk and host failures, and play around with the Ruby vSphere Console (RVC).

Below you will see a 3-node NUC cluster, without any local storage, running vSAN! Pretty cool stuff!
 


I really hope that I don’t have to explain this, but then again better safe than sorry. 🙂

Disclaimer!! This is officially not supported by VMware; please do not use this for production or evaluation.

Now that’s out of the way, let’s get started.

Configure vSAN with iSCSI disks

Before you continue, present the iSCSI LUNs to your ESXi hosts. Be aware that you should not share the LUNs across the ESXi hosts; present a dedicated set of LUNs for vSAN to each ESXi host.

Open the vSphere Web Client and mark one of the presented iSCSI LUNs as a Flash device.
Click on Hosts and Clusters -> Select a Host -> Configure -> Storage Devices -> Select the iSCSI LUN to be marked as SSD -> All Actions -> Mark as Flash Disk.

And voilà! The disk device is now marked as a Flash Disk instead of a Hard Disk.

Repeat the last step for all your ESXi hosts contributing storage to the vSAN Disk Group!
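
If you have many hosts, clicking through the UI gets tedious. As an alternative, something like the following pyVmomi sketch should work; it is purely illustrative, and the vCenter FQDN, credentials, and LUN canonical names are placeholders for your environment. It calls the HostStorageSystem MarkAsSsd task for every dedicated LUN that is not yet marked as flash.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim
    import ssl

    # Map each host to the canonical names of its dedicated iSCSI LUNs
    LUNS_PER_HOST = {
        "dcesxi71.vmbaggum.local": ["naa.6001405xxxxxxxxxxxxxxxxxxxxxxxxx"],  # placeholder
    }

    ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
    si = SmartConnect(host="vcenter.vmbaggum.local",  # placeholder FQDN
                      user="administrator@vsphere.local", pwd="***",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            wanted = set(LUNS_PER_HOST.get(host.name, []))
            storage = host.configManager.storageSystem
            for lun in storage.storageDeviceInfo.scsiLun:
                if (isinstance(lun, vim.host.ScsiDisk)
                        and lun.canonicalName in wanted and not lun.ssd):
                    print(f"Marking {lun.canonicalName} on {host.name} as flash")
                    WaitForTask(storage.MarkAsSsd(lun.uuid))  # MarkAsSsd_Task
        view.Destroy()
    finally:
        Disconnect(si)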

The next step is to SSH to your ESXi host and run the following command to allow iSCSI disks to be claimed by vSAN.

(more…)
