Limit the number of VTEPs for NSX

Everyone who has deployed NSX for vSphere will have configured the VXLAN Transport Parameters at some point.


Nothing really fancy about this, and it is pretty straightforward. But what if you want to limit the number of VTEPs for NSX due to a specific requirement of the deployment? The number of VTEPs is not editable in the UI, as described in the NSX documentation:

The number of VTEPs is not editable in the UI. The VTEP number is set to match the number of dvUplinks on the vSphere Distributed Switch being prepared.

So when you configure the VXLAN Transport Parameters for a host that is connected to a vSphere Distributed Switch (vDS) with 6 dvUplinks, it will automatically create 6 VMkernel interfaces.


But what if (due to questionable requirements) we need only 2 VMkernel interfaces on a vDS with 6 dvUplinks? How can we solve this? Well, fire up PowerNSX.

Limit the number of VTEPs for NSX with PowerNSX

Not familiar with PowerNSX? Well, you should be. PowerNSX is a PowerShell module that contains functions that call the VMware NSX for vSphere API. It will make your life so much easier and is almost indispensable when consistency and speed are key. Here you can find how to install PowerNSX and here you can find how to use PowerNSX.

Please alter the script below to match your environment.
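The script itself is not reproduced here, but the approach can be sketched with PowerNSX. Under the hood, the VXLAN cluster preparation API (POST /api/2.0/nwfabric/configure) accepts a vmknicCount element that sets the number of VTEPs. A minimal sketch, assuming an existing Connect-NsxServer session; the cluster, vDS and IP pool object IDs below are hypothetical and must be replaced with the ones from your environment:

```powershell
# Sketch only: re-run VXLAN preparation with an explicit VTEP count.
# domain-c7, dvs-21 and ipaddresspool-1 are placeholder IDs; look up
# the real morefs/IDs in your environment first.
$clusterMoref = 'domain-c7'
$vdsMoref     = 'dvs-21'
$ipPoolId     = 'ipaddresspool-1'
$vtepCount    = 2

$body = @"
<nwFabricFeatureConfig>
  <featureId>com.vmware.vshield.vsm.vxlan</featureId>
  <resourceConfig>
    <resourceId>$clusterMoref</resourceId>
    <configSpec class="clusterMappingSpec">
      <switch><objectId>$vdsMoref</objectId></switch>
      <vlanId>0</vlanId>
      <vmknicCount>$vtepCount</vmknicCount>
      <ipPoolId>$ipPoolId</ipPoolId>
    </configSpec>
  </resourceConfig>
</nwFabricFeatureConfig>
"@

# Push the config through the PowerNSX REST helper
Invoke-NsxRestMethod -Method post -Uri '/api/2.0/nwfabric/configure' -Body $body
```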

After the script has run successfully, the result will be only 2 VMkernel interfaces instead of the “default” 6 VMkernel interfaces:


Don’t forget to configure the uplink assignment on the VTEP portgroup afterwards: set the uplinks on which the VTEP VLANs are configured to “Active” and the rest to “Unused”.


Thanks to Alexander Ries for helping with the script.

Read More

VMworld Barcelona 2017

While I was sitting on my couch browsing through emails on my phone, it suddenly went *DING* and a new email appeared with the subject “VMworld Blogger Pass, Barcelona – YOU’RE GOING!”
Wait… What?? Woohoo! VMworld Barcelona 2017, here I come!

VMworld Barcelona 2017

This is the first time I’ve been selected to receive a VMworld Blogger Pass by the vExpert Team, and I’m truly grateful for the opportunity.

The VMworld Blogger Pass covers the full conference pass, but it doesn’t cover other expenses such as flights and hotels. Luckily for me, my employer ITQ sees VMworld as an opportunity to attend and share interactive sessions and group discussions with peers from the IT world.

But if you still need a good justification for your boss, read the article from Fabian Lenz called “Why VMworld 2017”, which contains some good reasons to convince your boss. Or even better, download the “Convince Your Manager” letter from the VMworld site!

Some handy links for VMworld:

I hope to see you in Barcelona to chat about whatever (even non-VMware related stuff 😉 ). Just say hi if you see me, or hit me up on Twitter @vMBaggum.

Adiós y hasta pronto!

Read More

Change the vmnic order on vSphere 6.x

There is always one ESXi host that thinks it’s special and therefore has a different configuration than its siblings. This week it was a brand spanking new UCS blade server that didn’t want to play nice, and the result was a different vmnic order.

I’ve done this fix several times before with older versions (< 5.5) of vSphere, but this time it was vSphere 6.0, so KB article 2019871, which describes how to do this up to version 5.1, did not apply anymore. But all the way at the bottom there is a link to KB article 2091560, which describes how to do this with vSphere 6.x!

How to change the vmnic order

Log on to your “special” ESXi host with your favorite SSH client.

Run the following command to see the current assignment of aliases to device locations:

The output will look as follows:

Then, to reassign an alias, run the following command:

For example, if you want to swap vmnic3 and vmnic4 use the following commands:
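KB article 2091560 uses the internal deviceInternal alias namespace of localcli for this. A sketch of the procedure; the PCI bus addresses below are placeholders, so take the real ones from the alias list output on your own host:

```shell
# Show the current alias -> bus address mapping (note down the PCI
# addresses of the two NICs you want to swap)
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias list

# Point each alias at the other NIC's PCI bus address; the addresses
# below are placeholders taken from the list output above
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store \
  --bus-type pci --alias vmnic3 --bus-address p0000:0c:00.0
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store \
  --bus-type pci --alias vmnic4 --bus-address p0000:0b:00.0
```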

After you have reassigned the aliases, perform a clean reboot of the ESXi host and you’re done!

Read More

NSX IPv6 Support

This week I got some good questions from a customer about NSX, especially on NSX IPv6 support.


I knew right away which features are not supported:

  • Distributed logical router: The DLR does not support IPv6 forwarding/routing.
  • Dynamic routing (OSPF, BGP): Only IPv6 static routes are supported on the Edge Services Gateway.
  • NAT, SLAAC and DHCPv6 on NSX Edge: The workloads should use static IPv6 address allocation.

But I couldn’t immediately answer the question of which NSX components support which kind of connectivity: IPv4, IPv6 or dual stack. To make things worse, the NSX 6.2 Documentation Center does not contain a lot of information about IPv6 support… Luckily for me (and the customer), some insiders provided me with the necessary information, and I would like to share it with you.

Detailed NSX IPv6 Support

Component / Feature             Support ¹         Notes
VM Addressing
  Guest VM Addressing           IPv4, IPv6, DS    VXLAN-encapsulated packets are capable of carrying an IPv6 payload. VMs can have only static IPv6 addresses; SLAAC (RA) and DHCPv6 (relay and server) are not supported.
  VXLAN Transport               IPv4
NSX Manager
  NSX Manager IP                IPv4, IPv6, DS
NSX Controller
  Management IP                 IPv4

(DS = dual stack)

Read More

Configure vSAN with iSCSI based disks

Every time I needed to test something on vSAN in my vLab, I had to spin up some dusty nested ESXi nodes because I don’t have the proper disks in my test NUCs. But now, thanks to William Lam, I found a way to allow vSAN to claim iSCSI disks and let them contribute to the vSAN Disk Group! This way I can leave the nested ESXi nodes powered off and very easily test disk and host failures and play around with the Ruby vSphere Console (RVC).

Below you will see a 3-node NUC cluster without any local storage running vSAN! Pretty cool stuff!

Configure vSAN

I really hope that I don’t have to explain this, but then again, better safe than sorry. 🙂

Disclaimer!! This is officially not supported by VMware; please do not use it for production or evaluation.

Now that that’s out of the way, let’s get started.

Configure vSAN with iSCSI disks

Before you continue, present the iSCSI LUNs to your ESXi hosts. Be aware not to share the LUNs across the ESXi hosts; present a dedicated set of LUNs for vSAN per ESXi host.

Open the vSphere Web Client and mark one of the presented iSCSI LUNs as a Flash device.
Click on Hosts and Clusters -> Select a Host -> Configure -> Storage Devices -> Select the iSCSI LUN to be marked as SSD -> All Actions -> Mark as Flash Disk.

And voilà! The disk device is now marked as a Flash Disk instead of a Hard Disk.

Repeat the last step for all your ESXi hosts contributing storage to the vSAN Disk Group!

The next step is to SSH to your ESXi host and run the following command to allow iSCSI disks to be claimed by vSAN.


Read More

How to configure vCenter 6.5 as a Subordinate CA

After getting super annoyed with clicking “Advanced” and then “Proceed to vCenter (unsafe)” every single time I needed to go to the vSphere Web Client, it was time for me to solve this once and for all.

Subordinate CA

Let’s get started!


Before you begin, make sure you have the following in place:

  • Configured Microsoft CA (link)
  • vSphere and vCenter Certificate Templates (link)

Generate Certificate Signing Request (CSR)

SSH to your vCenter Server when using vCenter with the embedded Platform Services Controller (PSC), or SSH to the PSC when using an external PSC.

Enable the BASH shell and set it as the default shell (link).

Run /usr/lib/vmware-vmca/bin/certificate-manager and select Option 2.


Read More

Automating the vRealize Automation Manager Service Failover

During a couple of vRealize Automation (vRA) design engagements I had to explain that the vRealize Automation Manager Service doesn’t have an automated failover process (active/passive) and relies on manual intervention. This was quite hard for the customers to understand and accept because of the active/active redundancy of other vRA components, like the Web Service.

So OK, what does the vRA Manager Service actually do (link)?

The Manager Service is a Windows service that coordinates communication between IaaS DEMs, the SQL Server database, agents, and SMTP. IaaS requires that only one Windows machine actively run the Manager Service. For backup or high availability, you may deploy additional Windows machines where you manually start the Manager Service if the active service stops.

And that last part is something my customers didn’t like (at all), because it depends on a person to activate the service manually. OK, then how can we solve this?

Automating the Manager Service Failover

I like to keep things simple and wanted to automate the Manager Service failover with vRealize Operations (vROps) monitoring the service and kicking off an action when the service is down. Eventually I got this to work, but it took way too much effort and I didn’t like the complex setup of vROps sending an SNMP trap to vRO and then letting vRO kick off a PowerShell script on the vRA IaaS Manager server. So back to the drawing board, and the solution turned out to be almost too simple: run a scheduled task on the secondary vRA IaaS Manager server that checks the Manager Service on the primary and starts it locally when the service is down.


Make sure that:

  • PowerShell allows the execution of scripts
  • The scheduled task is running under the vRA Service Account

The Script
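The script boils down to a remote service check and a conditional local start. A minimal sketch of that logic; the primary hostname and the Manager Service name below are assumptions for illustration, so verify the exact service name on your IaaS servers:

```powershell
# Runs on the secondary IaaS Manager server as a scheduled task.
# 'vra-iaas-mgr-01' and the service name are placeholder assumptions.
$primaryHost = 'vra-iaas-mgr-01'
$serviceName = 'VMware vCloud Automation Center Service'

try {
    # Query the Manager Service on the primary node
    $primary = Get-Service -ComputerName $primaryHost -Name $serviceName -ErrorAction Stop
    $primaryRunning = ($primary.Status -eq 'Running')
}
catch {
    # Host unreachable or service not found: treat the primary as down
    $primaryRunning = $false
}

if (-not $primaryRunning) {
    # Primary is down: start the Manager Service locally if it isn't running yet
    $local = Get-Service -Name $serviceName
    if ($local.Status -ne 'Running') {
        Start-Service -Name $serviceName
    }
}
```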


Read More

NSX for vSphere Configuration Maximums

This post describes the NSX for vSphere Configuration Maximums for versions 6.0.x, 6.1.x and 6.2.x.

NSX for vSphere Configuration Maximums

Whenever I got into a discussion about sizing, scalability and maximums of NSX, I always turned to an excellent post written by Martijn Smit. But that post only contained information up to version 6.1.x, not the latest version 6.2.x. Then, during one of my projects, some questions around the scalability of version 6.2.x came up and we had to do some research to find these scalability numbers. You can find the results of that research below.

UPDATE: VMware has requested that I take down this article because I used internal numbers that are only meant to help the field design deployments. Without other design information, the numbers alone could create support and competitive issues. Therefore I have updated the article and removed the items that cannot be found in the official documentation center.

                                      NSX 6.0    NSX 6.1    NSX 6.2
vCenter                               1          1          1
Controllers                           3          3          3
vCenter Clusters                      12         12         16
Hosts per Cluster                     32         32         32
Hosts per Transport Zone              256        256        256
Logical Switches                      10,000     10,000     10,000
Logical Switch Ports                  50,000     50,000     50,000
VXLAN/VLAN L2 Bridges per DLR         500        500        500

Identity Firewall
Active Directory Groups ³             500        500        500
Users per Group ³                     1,000      1,000      1,000
Max # of users in a domain ³          500,000    500,000    500,000
VMs joined to a domain                1,000      1,000      1,000
Maximum # of domains                  10         10         10

L3 Distributed Logical Router (DLR)
DLRs per ESXi host                    100        1,000      1,000
DLRs per NSX Manager                  1,200      1,200      1,200
Interfaces per DLR                    991        991        991
Uplinks per DLR                       8          8          8
Interfaces per ESXi Host              10,000     10,000     10,000
OSPF Adjacencies per DLR              10         10         10
BGP Neighbors per DLR                 10         10         10
Maximum Paths with ECMP               –          8          8

L3 Edge Services Gateway (ESG)
Maximum # of ESGs ⁴                   2,000      2,000      2,000
Maximum # of Interfaces               10         10         10
Maximum # of Sub-interfaces           –          200        200
Secondary IP addresses                2,000      2,000      2,000

¹ As this depends on multiple factors, please contact VMware for an accurate estimate.
² Number of records over 15 days.
³ At the moment there is no upper limit on any of these numbers, so Active Directory synchronization may work with even larger Active Directory setups.
⁴ HA does not have an impact on the maximum number of ESGs.

NSX for vSphere Configuration Maximums Disclaimer

These numbers can be used as guidance and are not 110% confirmed by VMware. I’m still hopeful that VMware will soon publish an official NSX Configuration Maximums document so we no longer have to gather these numbers from all over the place. Only time will tell 🙂 Enjoy!

Read More

Trend Micro Deep Security and NSX 6.2.3 issue

Last week I had the pleasure of upgrading vCNS 5.5.4 to NSX 6.2.3 at a customer that was also running Trend Micro Deep Security 9.6 SP1. Before the upgrade I checked the compatibility matrices here, here, here and here, and it looked like everything checked out. So I went ahead, and the upgrade went super smooth and ran without any issues. After the upgrade was completed I linked the Trend Micro Deep Security Manager to the NSX Manager, we protected the VMs, and again all looked good. But then… I ran into the most annoying error known to man (with Trend Micro Deep Security): “Anti-Malware Engine Offline” and “Web Reputation Engine Offline”.

NSX 6.2.3

Oh joy!

Let the troubleshooting begin!

  • Filter drivers on the ESXi hosts
    • Check: all ESXi hosts have the filter driver removed.
  • Guest Introspection drivers in VMware Tools
    • Check: all VMs have an updated version of VMware Tools with the Guest Introspection option enabled.
  • NSX licensing
    • Check: NSX 6.2.3 is licensed as “NSX for vSphere”.
  • Trend Micro Deep Security licensing
    • Check: Anti-Malware and Web Reputation are licensed.
  • NSX Security Policy
    • Check: the correct NSX Security Policy is in place and applied to all VMs.
  • NSX Guest Introspection Service VMs
    • Check: the NSX Guest Introspection Service VMs are deployed and the service is up and running.
  • Trend Micro Deep Security Service VMs
    • Check: the Trend Micro Deep Security Service VMs are deployed and the service is up and running.
  • Trend Micro Deep Security Policy
    • Bingo! Disabling Web Reputation also solved the “Anti-Malware Engine Offline” error. We have a lead!


Read More

NSX Manager SFTP Backup

During my last couple of NSX projects, the backup of the NSX Manager proved to be somewhat of a challenge. Using the NSX Manager, it is possible to create backups via the FTP or SFTP transfer protocol, but because we wanted to adhere to the NSX hardening recommendations, SFTP is the preferred transfer protocol. No biggie, you would think, except that most of the customers did not possess the proper SFTP (not to be confused with FTPS!) software to support this.

Why is it so important to create a proper backup of the NSX Manager? Well, that’s because the backup contains the following components:

  • NSX configuration
  • NSX Controllers configuration
  • Logical switches configuration
  • Routing configuration
  • Security groups, policies and settings
  • All firewall rules
  • And simply everything else that you configure within the NSX Manager UI or API

I think you now understand why you want to have these settings safely stored away.

So what are our options? There are lists of stand-alone SFTP servers that can be used for this task. For some customers it is difficult to procure this type of software online, and they would rather use “freeware”. Then the next problem arises: some companies won’t use encryption software if it’s not commercial… Yeah, I love those discussions with the security guys 🙂 .

OK, so just for the sake of it (and because I’m not bound by any security guys looking over my shoulder), I’m just going for the NSX Manager SFTP Backup based on freeFTPd for Windows.

Read More