VMware vCenter 6 plugin errors after upgrade

Last week I upgraded a customer's vSphere 5.5 environment to vSphere 6. Everything went smoothly: I upgraded VSAN to v2, and even Veeam picked up the new vCenter 6 appliance without a hitch. But when I opened the vSphere Client (yeah, I still use it sometimes… sorry for that) there were two plugin errors.
 
[Image: vCenter-Plugin]
 

VMware vCenter Storage Monitoring Service

The plug-in failed to load on server(s) xxxxx due to the following error: Could not load file or assembly ‘VI, Version=5.5.0.0, Culture=neutral, PublicKeyToken=xxxxx’ or one of its dependencies. The system cannot find the file specified.

This is expected behaviour in VMware vCenter 6, because the “Storage Views” tab is no longer available in the vSphere 6.0 Client.

The cure is easy: just uninstall the old vSphere 5.x client(s) and the “error” is gone!

Auto Deploy

The following error occurred while downloading the script plugin from https://xxxxx:6502/vmw/rbd/extensions.xml: The request failed because of a connection failure. (Unable to connect to the remote server)
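
Before diving into a fix, you can confirm the connection failure yourself by requesting the same extensions.xml URL the client complains about. Here is a minimal Python sketch; the hostname is a placeholder for your own vCenter / Auto Deploy server, while the port and path come straight from the error message:

```python
# Quick connectivity check against the Auto Deploy endpoint from the error above.
# The hostname is a placeholder -- the port (6502) and the /vmw/rbd/extensions.xml
# path are taken from the error message itself.
import requests

URL = "https://vcenter.lab.local:6502/vmw/rbd/extensions.xml"  # placeholder host

try:
    resp = requests.get(URL, verify=False, timeout=5)
    print(f"HTTP {resp.status_code} -- the Auto Deploy web server is answering")
except requests.exceptions.ConnectionError:
    print("Connection failure -- nothing is listening on port 6502")
```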

There are two ways to solve this issue:
(more…)


Start, Shut Down or Restart vRealize Automation

[Image: vRA-Logo]

The start, shut down, or restart sequence of vRealize Automation (vRA) isn't that difficult, but throw in a load balancer or a bigger distributed vRA environment and things get interesting.

Load Balancer

When you are using a load balancer in your configuration, check what kind of “health monitor” you are using for the vRA appliances. If you are using the vRA appliance service “health monitor”, change it to plain old ICMP, or the vRA service won't come online after a cold boot.

[Image: vRA-LB]
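
To illustrate why the monitor type matters, here is a small Python sketch contrasting the two checks during a cold boot: the appliance answers ICMP long before the vRA service answers HTTPS. The hostname and the use of the appliance's HTTPS front end as the “service” check are assumptions, so match them to whatever your load balancer monitor actually probes.

```python
# Sketch of the two monitor styles during a cold boot (placeholder hostname).
# Uses the Linux "ping" binary for the ICMP-style check and a plain HTTPS GET
# as a stand-in for a service-level check -- both are assumptions, not the
# exact probes your load balancer sends.
import subprocess
import requests

NODE = "vra01.lab.local"  # placeholder vRA appliance hostname

# ICMP-style check: is the appliance reachable at all?
icmp_ok = subprocess.call(
    ["ping", "-c", "1", "-W", "2", NODE],
    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
) == 0

# Service-style check: is the vRA service itself answering yet?
try:
    service_ok = requests.get(f"https://{NODE}/", verify=False, timeout=5).ok
except requests.exceptions.RequestException:
    service_ok = False

print(f"ICMP reachable: {icmp_ok}  |  vRA service answering: {service_ok}")
# During a cold boot, ICMP typically comes back True well before the service
# check does, which is why a service-level monitor keeps the pool member down.
```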

Start vRealize Automation

When you start vRA after a power outage or a controlled shutdown, you must start the vRA components in the following order (a small wait-for-service sketch follows the list):

  1. Boot the MSSQL Server / Cluster
    • Wait until the service is up
  2. Boot the PostgreSQL Server / Cluster
    • Wait until the service is up
  3. Boot the Identity Appliance or SSO Server
    • Wait until the service is up
  4. Boot the Primary vRealize Appliance
    • Wait until the VM is up
  5. Boot the optional Secondary vRealize Appliance
    • Wait until the VM is up
  6. Boot the Primary Web Server
    • Wait until the VM is up
  7. Boot the optional Secondary Web Server
    • Wait until the VM is up and wait 5 minutes
  8. Boot all the Manager Servers
    • Wait until the VMs are up and wait 2-5 minutes
  9. Boot all the vRealize Automation Agent Servers
    • Wait until the VMs are up
  10. Boot all the Distributed Execution Manager Orchestrator / Worker Servers
    • Wait until the VMs are up
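
For the “wait until the service / VM is up” steps you can of course just watch the consoles, but if you script the cold boot, a simple TCP port poll is usually enough. A minimal sketch, assuming default ports (MSSQL 1433, PostgreSQL 5432, HTTPS 443 on the appliances) and placeholder hostnames:

```python
# Minimal "wait until the service is up" helper for scripting the boot order
# above. Hostnames are placeholders and the ports are the usual defaults
# (MSSQL 1433, PostgreSQL 5432, HTTPS 443) -- adjust for your environment.
import socket
import time

def wait_for_port(host, port, timeout=900, interval=15):
    """Poll a TCP port until it accepts connections or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} is up")
                return
        except OSError:
            time.sleep(interval)
    raise TimeoutError(f"{host}:{port} did not come up within {timeout} seconds")

# Example checks mirroring the first steps of the list (placeholder hostnames)
wait_for_port("mssql01.lab.local", 1433)     # 1. MSSQL Server / Cluster
wait_for_port("postgres01.lab.local", 5432)  # 2. PostgreSQL Server / Cluster
wait_for_port("sso01.lab.local", 443)        # 3. Identity Appliance / SSO
wait_for_port("vra01.lab.local", 443)        # 4. Primary vRealize Appliance
```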

(more…)


vRealize Operations HA Using an F5 Load Balancer

Last week I was configuring vRealize Operations (vROps) HA using an F5 Load Balancer. The deployment of a vROps HA cluster is pretty straightforward; the only “big” challenge I encountered was creating the F5 health monitor for the vROps HA cluster nodes. The reason is that with the default TCP or HTTPS monitor checks you only know that the node responded on HTTPS, not that the vROps cluster node is actually up and running. And that's not what we wanted!

After some digging around in the normal vROps REST API I couldn't find a way to check the status of the vROps HA nodes, but then I found the CaSA (Cluster and Slice Administration) REST API, which does offer a way to check whether a slice (cluster node) is online or not.

Disclaimer: beware, the CaSA API is currently a private API and may change in future releases.

Cluster node / slice online:

[Image: vROPS-HA-F5-01]

Cluster node / slice offline:

[Image: vROPS-HA-F5-02]

Now that we know this, we can use it for our F5 health monitor!
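
To show the idea behind the monitor, here is a minimal Python sketch that performs the same kind of check. The CaSA endpoint path, the admin credentials and the “ONLINE” state string are assumptions based on the responses shown above, so verify them against your own vROps build before using anything like this (remember, CaSA is a private API):

```python
# Minimal sketch of a CaSA-based "is this slice online?" check (the same idea
# the F5 monitor will use). The endpoint path, credentials and the "ONLINE"
# string are assumptions/placeholders -- verify them against your vROps build.
import requests

NODE = "vrops-node01.lab.local"                                # placeholder hostname
CASA_URL = f"https://{NODE}/casa/sysadmin/slice/online_state"  # assumed path
AUTH = ("admin", "changeme")                                   # placeholder credentials

def slice_is_online():
    try:
        resp = requests.get(CASA_URL, auth=AUTH, verify=False, timeout=5)
        resp.raise_for_status()
    except requests.exceptions.RequestException:
        return False
    # Assumed response format: a body containing an ONLINE/OFFLINE state string
    return "ONLINE" in resp.text

print("Slice is online" if slice_is_online() else "Slice is offline")
```
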
(more…)
