EMC Avamar Plugin for vRealize Automation

For a vRealize Automation project I’m currently working on, one of the requirements is that each provisioned VM is added to EMC Avamar Data Protection. Normally we do this by making REST API calls to the Avamar server, which works OK, but can’t we do it more simply than that? Now we can!
Introducing the EMC Avamar Plugin for vRealize Automation.


This post describes how to install the EMC Avamar Plugin for vRealize Automation and add day-two operations to VMs provisioned through vRA.
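To show what the plugin saves us from, here is a rough sketch of the "normal" REST approach: registering a freshly provisioned VM as an Avamar client. The host name, the `/api/clients` endpoint path, and the payload fields are hypothetical placeholders, not the actual Avamar REST API; check the Avamar REST API guide for your release for the real calls.

```shell
# Sketch only -- endpoint and payload are hypothetical placeholders.
AVAMAR_HOST="avamar.example.local"            # assumption: your grid's FQDN
VM_NAME="prov-vm-01"                          # assumption: name of the new VM
API_URL="https://${AVAMAR_HOST}/api/clients"  # hypothetical endpoint

# Build the JSON body for the client-registration call
PAYLOAD="{\"name\": \"${VM_NAME}\", \"type\": \"vmachine\"}"
echo "POST ${API_URL}"
echo "${PAYLOAD}"

# In practice the call itself would look something like:
# curl -k -u "${USER}:${PASS}" -H 'Content-Type: application/json' \
#      -X POST "${API_URL}" -d "${PAYLOAD}"
```

Wiring a call like this into every blueprint is exactly the plumbing the plugin makes unnecessary.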


Requirements:

  • vRealize Automation (vRA) 6.x installed
  • vRealize Orchestrator (vRO) 5.5.x / 6.x installed and configured in vRA
  • vRealize Automation IaaS must be configured to use vRO workflows for customizations (link)
  • EMC Avamar 7.1.x configured and the VMware vCenter client configured
  • EMC Avamar Plugin for vRealize Automation package

    VMware SRM and EMC VNX NFS IPv6 issue

    Today I was updating VMware Site Recovery Manager (SRM) for a customer after their EMC VNX arrays were upgraded to a new firmware level.

    It all went smoothly and SRM seemed pretty happy with all green checkboxes, so it was time to run the Test Recovery Plan. But the Test Recovery Plan failed and returned the following error: “Error – Failed to recover datastore ‘DatastoreName’. An error occurred during host configuration”.


    And another beautiful “An unknown error has occurred” message appeared in the Tasks & Events tab of the ESXi hosts on which SRM tried to configure the NFS export.


    According to the message, the issue is that the NFS export couldn’t be mounted on the ESXi host, which is odd because (a) the NFS exports could be mounted before, and (b) the hosts have actively mounted NFS exports from the same storage array.

    After some troubleshooting and digging through the log files, the only thing that seemed different from before was that IPv6 addresses were now added to the “Access hosts” list of the fail-over test NFS export that SRM created. This happens because SRM queries the ESXi hosts for all (!!) VMkernel ports and adds those IP addresses to the “Access hosts” of the fail-over test NFS export, including the IPv6 addresses.

    Fortunately for me, this customer doesn’t use IPv6 internally, so it was quite easy to test whether disabling IPv6 would solve the issue. And it did! After disabling IPv6, the NFS exports were mounted and the VMware SRM recovery plans finished successfully.

    IPv6 can be disabled on a 5.x or 6.x ESXi host by running the following command from the CLI:
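The esxcli command is:

```shell
# Disable IPv6 support on the ESXi host (takes effect after a reboot)
esxcli network ip set --ipv6-enabled=false
```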

    Or through the Web Client: Host -> Manage -> Networking -> Advanced -> Edit -> IPv6 Support -> Disabled


    Or just from the good old vSphere Client: Host -> Configuration -> Networking -> Properties -> Uncheck “Enable IPv6 support on this host system”


    And then reboot the host.

    We have submitted an SR to EMC and I’ll update this article if we receive any feedback on this issue.

    Optimize ESXi for EMC XtremIO

    For a project I’m currently working on, I was asked to document the recommended ESXi host settings required to get the best performance from an EMC XtremIO storage array. This was a good opportunity for me to dive a little deeper into the configuration, do some performance testing and share the results.


    ESXi XtremIO Host Settings

    Set the maximum number of consecutive “sequential” I/Os allowed from one VM before switching to another VM.
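This corresponds to the Disk.SchedQuantum advanced setting. As a sketch, the value of 64 below is the commonly cited XtremIO recommendation (the ESXi default is 8); verify it against the XtremIO host configuration guide for your array's release before applying it:

```shell
# Check the current value of Disk.SchedQuantum (ESXi default is 8)
esxcli system settings advanced list --option /Disk/SchedQuantum

# Raise it to 64, the value commonly recommended for XtremIO arrays
esxcli system settings advanced set --option /Disk/SchedQuantum --int-value 64
```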