NSX IPv6 Support

This week I got some good questions from a customer about NSX, especially about its IPv6 support.
 


And I knew which features are not supported:

  • Distributed logical router: The DLR does not support IPv6 forwarding / routing.
  • Dynamic routing (OSPF, BGP): Only IPv6 static routes are supported on the Edge Services Gateway.
  • NAT, SLAAC and DHCPv6 on NSX Edge: The workloads should use static IPv6 address allocation.
    But I couldn’t immediately answer the question of which NSX components support which connectivity: IPv4, IPv6, or dual stack. To make things worse, the NSX 6.2 Documentation Center does not contain a lot of information about IPv6 support… Luckily for me (and the customer), some insiders provided me with the necessary information, and I would like to share it with you.

    Detailed NSX IPv6 Support

    (more…)

    Component | Feature | Support¹ | Notes
    VM Addressing | Guest VM Addressing | IPv4, IPv6, Dual Stack | VXLAN-encapsulated packets can carry IPv6 payloads. VMs can only have static IPv6 addresses; SLAAC (RA) and DHCPv6 (relay and server) are not supported.
    VM Addressing | VXLAN Transport | IPv4 |
    NSX Manager | NSX Manager IP | IPv4, IPv6, Dual Stack |
    NSX Controller | Management IP | IPv4 |
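    Since SLAAC and DHCPv6 are not supported, guest VMs need static IPv6 addressing. As an illustration only (a Debian-style /etc/network/interfaces fragment with documentation-prefix addresses of my own, not taken from the original post):

```
# /etc/network/interfaces - static IPv6 for a guest VM (illustrative addresses)
iface eth0 inet6 static
    address 2001:db8::10
    netmask 64
    gateway 2001:db8::1
```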


    Configure vSAN with iSCSI based disks

    Every time I needed to test something on vSAN in my vLab, I had to spin up some dusty nested ESXi nodes because I don’t have the proper disks in my test NUCs. But now, thanks to William Lam, I found a way to allow vSAN to claim iSCSI disks and let them contribute to the vSAN Disk Group! This way I can leave the nested ESXi nodes powered off, very easily test disk and host failures, and play around with the Ruby vSphere Console (RVC).

    Below you will see a 3-node NUC cluster without any local storage running vSAN! Pretty cool stuff!
     

     

    I really hope that I don’t have to explain this, but then again better safe than sorry. 🙂

    Disclaimer!! This is not officially supported by VMware; please do not use this in production or for evaluation.

    Now that that’s out of the way, let’s get started.

    Configure vSAN with iSCSI disks

    Before you continue, present the iSCSI LUNs to your ESXi hosts. But be careful not to share LUNs across the ESXi hosts; present a dedicated set of vSAN LUNs to each ESXi host.

    Open the vSphere Web Client and mark one of the presented iSCSI LUNs as a Flash device.
    Click on Hosts and Clusters -> Select a Host -> Configure -> Storage Devices -> Select the iSCSI LUN to be marked as SSD -> All Actions -> Mark as Flash Disk.
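    If you prefer the command line, the same tagging can be done with a SATP claim rule (a sketch; the device identifier below is a placeholder for your own iSCSI LUN, and the SATP name depends on which plugin claims the device, which you can check with esxcli storage nmp device list):

```
# Tag the iSCSI device as a flash device via a SATP claim rule, then reclaim it
esxcli storage nmp satp rule add --satp=VMW_SATP_DEFAULT_AA --device=naa.xxxx --option=enable_ssd
esxcli storage core claiming reclaim --device=naa.xxxx
```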

    And voilà! The disk device is now marked as a Flash Disk instead of a Hard Disk.

    Repeat the last step for all your ESXi hosts contributing storage to the vSAN Disk Group!

    The next step is to SSH to your ESXi host and run the following command to allow iSCSI disks to be claimed by vSAN.

    (more…)


    How to configure vCenter 6.5 as a Subordinate CA

    After getting super annoyed with clicking “Advanced” and then “Proceed to vCenter (unsafe)” every single time I needed to go to the vSphere Web Client, it was time to solve this once and for all.

     

    Let’s get started!

    Pre-requisites

  • Configured Microsoft CA (link)
  • vSphere and vCenter Certificate Templates (link)

    Generate Certificate Signing Request (CSR)

    SSH to your vCenter Server when using vCenter with an embedded Platform Services Controller (PSC), or SSH to the PSC when using an external PSC.

    Enable the BASH shell and set it to the default shell (link).

    Run /usr/lib/vmware-vmca/bin/certificate-manager and Select Option 2.
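    Put together, the console steps look roughly like this (a sketch; the option-2 description is taken from the vSphere 6.x certificate-manager menu, so double-check it on your own appliance):

```
# From the appliance shell over SSH:
shell.set --enabled true   # enable the BASH shell
shell                      # drop into BASH
chsh -s /bin/bash root     # optional: make BASH the default shell for root

# Start the certificate manager and pick option 2
/usr/lib/vmware-vmca/bin/certificate-manager
# Option 2: Replace VMCA Root certificate with Custom Signing Certificate
#           and replace all Certificates
```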

    (more…)


    Automating the vRealize Automation Manager Service Failover

    During a couple of vRealize Automation (vRA) design engagements I had to explain that the vRealize Automation Manager Service doesn’t have an automated failover process (active/passive) and relies on manual intervention. This was quite hard for the customers to understand and accept, because other vRA components, like the Web Service, do offer active/active redundancy.

    OK, so what does the vRA Manager Service actually do (link)?

    The Manager Service is a Windows service that coordinates communication between IaaS DEMs, the SQL Server database, agents, and SMTP. IaaS requires that only one Windows machine actively run the Manager Service. For backup or high availability, you may deploy additional Windows machines where you manually start the Manager Service if the active service stops.

    And that last part is something my customers didn’t like (at all), because it depends on a person activating the service manually. OK, so how can we solve this?

    Automating the Manager Service Failover

    I like to keep things simple and wanted to automate the Manager Service failover with vRealize Operations (vROps): monitor the service and kick off an action when it goes down. Eventually I got this to work, but it took way too much effort, and I didn’t like the complex setup of vROps sending an SNMP trap to vRO and then having vRO kick off a PowerShell script on the vRA IaaS Manager server. So back to the drawing board, and the solution turned out to be almost embarrassingly simple: run a scheduled task on the secondary vRA IaaS Manager server that checks the Manager Service on the primary and starts it locally when the service is down.
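    The logic of that scheduled task can be sketched as follows (a minimal sketch with stand-in command names; the actual implementation would be a PowerShell script using Get-Service against the primary node and Start-Service locally):

```shell
#!/bin/sh
# Sketch of the scheduled-task logic on the secondary IaaS Manager server.
#   $1: command that exits 0 while the Manager Service on the primary is running
#   $2: command that starts the Manager Service locally
failover_check() {
  if "$1"; then
    echo "primary Manager Service running - nothing to do"
  else
    "$2"
    echo "primary Manager Service down - started locally"
  fi
}
```

Scheduling this check every minute keeps the failover window small without any extra monitoring infrastructure.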

    Pre-requisites

  • The PowerShell execution policy allows running scripts
  • Scheduled task is running under the vRA Service Account
    The Script

    (more…)


    NSX for vSphere Configuration Maximums

    This post describes the NSX for vSphere Configuration Maximums for versions 6.0.x, 6.1.x and 6.2.x.


    Whenever I got into a discussion about sizing, scalability and maximums of NSX, I always turned to an excellent post written by Martijn Smit. But that post only contained information up to version 6.1.x, not the latest version 6.2.x. Then, during one of my projects, some questions around the scalability of version 6.2.x came up and we had to do some research to find these scalability numbers. You can find the results of that research below.
     

    UPDATE: VMware has requested that I take down this article because I used internal numbers that are only meant to help the field design deployments. Without other design information, the numbers alone could create support and competitive issues. Therefore I updated the article and removed the items that cannot be found in the official Documentation Center.

    | | NSX 6.0 | NSX 6.1 | NSX 6.2 |
    Nodes
    | vCenter | 1 | 1 | 1 |
    | Controllers | 3 | 3 | 3 |
    | vCenter Clusters | 12 | 12 | 16 |
    | Hosts per Cluster | 32 | 32 | 32 |
    | Hosts per Transport Zone | 256 | 256 | 256 |
    L2
    | Logical Switches | 10,000 | 10,000 | 10,000 |
    | Logical Switch Ports | 50,000 | 50,000 | 50,000 |
    | VXLAN/VLAN L2 Bridges per DLR | 500 | 500 | 500 |
    Identity Firewall
    | Active Directory Groups³ | 500 | 500 | 500 |
    | Users per Group³ | 1,000 | 1,000 | 1,000 |
    | Max # of users in a domain³ | 500,000 | 500,000 | 500,000 |
    | VMs joined to a domain | 1,000 | 1,000 | 1,000 |
    | Maximum # of domains | 10 | 10 | 10 |
    L3 Distributed Logical Router (DLR)
    | DLRs per ESXi host | 100 | 1,000 | 1,000 |
    | DLRs per NSX Manager | 1,200 | 1,200 | 1,200 |
    | Interfaces per DLR | 991 | 991 | 991 |
    | Uplinks per DLR | 8 | 8 | 8 |
    | Interfaces per ESXi Host | 10,000 | 10,000 | 10,000 |
    | OSPF Adjacencies per DLR | 10 | 10 | 10 |
    | BGP Neighbors per DLR | 10 | 10 | 10 |
    | Maximum Paths with ECMP | - | 8 | 8 |
    L3 Edge Services Gateway (ESG)
    | Maximum # of ESGs⁴ | 2,000 | 2,000 | 2,000 |
    | Maximum # of Interfaces | 10 | 10 | 10 |
    | Maximum # of Sub-interfaces | - | 200 | 200 |
    | Secondary IP addresses | 2,000 | 2,000 | 2,000 |

    ¹ As this depends on multiple factors, please contact VMware for an accurate estimate.
    ² Number of records over 15 days.
    ³ At the moment there is no upper limit on any of these numbers, so Active Directory synchronization may work with even larger Active Directory setups.
    ⁴ HA does not have an impact on the maximum number of ESGs.

    NSX for vSphere Configuration Maximums Disclaimer

    These numbers can be used as guidance and are not 110% confirmed by VMware. I’m still hopeful that VMware will soon publish an official NSX Configuration Maximums document, so we no longer have to gather these numbers from all over. Only time will tell 🙂 Enjoy!


    Trend Micro Deep Security and NSX 6.2.3 issue

    Last week I had the pleasure of upgrading vCNS 5.5.4 to NSX 6.2.3 at a customer that was also running Trend Micro Deep Security 9.6 SP1. Before the upgrade I checked the compatibility matrices here, here, here and here, and everything seemed to check out. So I went ahead, and the upgrade ran super smoothly without any issues. After the upgrade was completed I linked the Trend Micro Deep Security Manager to the NSX Manager, we protected the VMs, and again all looked good. But then… I ran into the most annoying error known to man (with Trend Micro Deep Security): “Anti-Malware Engine Offline” and “Web Reputation Engine Offline”.
     


    Oh Joy!
     

    Let the troubleshooting begin!

  • Filter drivers on the ESXi hosts
    • Check: all ESXi hosts have the filter driver removed.
  • Guest Introspection drivers in VMware Tools
    • Check: all VMs have an updated version of VMware Tools with the Guest Introspection option enabled.
  • NSX licensing
    • Check: NSX 6.2.3 is licensed as “NSX for vSphere”.
  • Trend Micro Deep Security licensing
    • Check: Anti-Malware and Web Reputation are licensed.
  • NSX Security Policy
    • Check: the correct NSX Security Policy is in place and applied to all VMs.
  • NSX Guest Introspection Service VMs
    • Check: the NSX Guest Introspection Service VMs are deployed and the service is up and running.
  • Trend Micro Deep Security Service VMs
    • Check: the Trend Micro Deep Security Service VMs are deployed and the service is up and running.
  • Trend Micro Deep Security Policy
    • Bingo! Disabling Web Reputation also solved the “Anti-Malware Engine Offline” error. We have a lead!

     
    (more…)


    NSX Manager SFTP Backup

    During my last couple of NSX projects, the backup of the NSX Manager proved to be somewhat of a challenge. Using the NSX Manager it is possible to create backups via the FTP or SFTP transfer protocol, but because we wanted to adhere to the NSX hardening recommendations, SFTP is the preferred transfer protocol. No biggie, you would think, except that most customers did not possess the proper SFTP (not to be confused with FTPS!!) software to support this.

    Why is it so important to create a proper backup of the NSX Manager? Well, that’s because the backup contains the following components:

  • NSX configuration
  • NSX Controllers configuration
  • Logical switches configuration
  • Routing configuration
  • Security groups, policies and settings
  • All firewall rules
  • And simply everything else that you configure within the NSX Manager UI or API
    I think you now understand why you want to have these settings safely stored away.

    So what are our options? On SFTP.net the authors have compiled a list of stand-alone SFTP servers that can be used for this task. For some customers it is difficult to procure these types of software online, and they would rather use “freeware”. Then the next problem arises: some companies won’t use encryption software if it’s not commercial… Yeah, I love those discussions with the security guys 🙂 .

    OK, so just for the sake of it (and because I’m not bound by any security guys looking over my shoulder), I’m going for an NSX Manager SFTP backup based on FreeFTPd for Windows.
    (more…)


    VCDX #223

    The two-week waiting period after the defense was maddening. I had mood swings ranging from “what the F did I do” to “maybe juuuuuust maybe”.

    And then finally, after two weeks of nightmares and nail biting, “the” defense results email arrived. I think I’ll never forget the moment I received “the” email…

    I was driving home from work and got a message from my fellow VCDX wannabe Matthew Bunce stating that someone had already got his defense results. While reading this message an email notification appeared at the top of my phone titled “VCDX – VCDX-DCV Defense Results”, and my heart dropped to the bottom of my car. I managed to raise my finger over the notification, swipe it to the right and read the first line of the email.
     


     
    I remember thinking to myself, what do you mean “Congratulations! You passed!”? What the? I passed!?! After I almost crashed my car into the crash barrier (which proves again that you should not read emails on your phone while driving!) I directly phoned Matthew, and he had received the same good news! The VCDX directory has two new additions to the VCDX family, VCDX #222 and #223! Happy days!

    You can read more about my VCDX journey and background on the blog of Gregg Robertson in his VCDX Spotlight section : The Saffageek VCDX Spotlight – Marco van Baggum

    It has now been more than two weeks since I got my VCDX results back and it is finally starting to sink in: I actually PASSED! I’m still over the moon and can’t wait to get started on some new projects and write some overdue blog posts.

    To be continued!



    Upgrade vRealize Automation 6.2 To 7.x

    This post describes how to upgrade vRealize Automation 6.2 to 7.x. Before performing this upgrade, please read my previous post “vRealize Automation 7 Upgrade Considerations“, which describes multiple pitfalls and could prevent potential issues.

    Done reading? OK then let’s start!
     

     

    Step 1 : Backup current Installation

    Before you do anything, back up your current installation! Believe me when I say this is a critical step; if something goes wrong, you don’t want to rely on a VM snapshot alone…

    Step 2 : Shutdown vRealize Automation services on your IaaS server

    Shut down the services on the IaaS servers in the following order. But be absolutely sure not to shut down the actual machines, otherwise the appliance upgrade will fail.
    Each virtual machine has a Management Agent, which should be stopped with each set of services.

  • All VMware vCloud Automation Center agents
  • All VMware DEM workers
  • VMware DEM orchestrator
  • VMware vCloud Automation Center Service
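    The shutdown order above can be sketched as a small script (a sketch only; on the real Windows IaaS servers each line would be a net stop / Stop-Service call, and the service names shown are the display names from the list above):

```shell
#!/bin/sh
# Sketch of the required IaaS shutdown order: agents first, then DEM Workers,
# the DEM Orchestrator, and finally the Manager Service itself. The services
# are echoed here so the ordering is explicit.
stop_iaas_services() {
  for svc in \
      "VMware vCloud Automation Center Agents" \
      "VMware DEM Workers" \
      "VMware DEM Orchestrator" \
      "VMware vCloud Automation Center Service"
  do
    echo "stopping: $svc"
  done
}
```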
    (more…)


    vRealize Automation 7 Upgrade Considerations

    For an engagement last week, I had to find out if there are any considerations for performing an in-place upgrade to vRealize Automation 7. And funnily enough, I found a few…
     

     

    vRealize Automation 7 Upgrade Considerations

  • The minimum upgrade version to vRA 7.0 is vRA 6.2.x
    • Note: vRA 6.2.4 will not be supported for upgrade to 7.0 until 7.x
  • vRA 7.0 will only work with vRO 7.0
  • Customers with vRA 6.0 / 6.1 need to upgrade to 6.2.x first
  • The upgrade process to vRA 7.0 will stop if:
    • Physical Endpoints are detected
    • vCloud Director Endpoints are detected
  • Application Services Blueprints will not be migrated
  • The Add Component action for Multi-Machine Blueprints will not be available in 7.0
  • The vRA 7.0 vRO plug-in is not backward compatible
  • Customizations that leverage the Custom Components Catalog (CCC) and the vCloud Automation Center Designer (CDK) will not be supported in 7.0

    Background Information:


    Physical Endpoints

    All previously supported physical endpoints, like HP iLO, Cisco UCS and Dell iDRAC, are no longer supported. I could not find a specific reason for this, only that support did not make the vRA 7.0 release.
    (more…)
