vROps cloud API getting started

I wanted to keep track of what needs to be done before we can actually query the API on vROps Cloud. I've had a hard time finding the documentation I needed in the past

The first step is to get an API token for the specific username. We can do this by going to My Account under User Settings in vROps Cloud

Go to API Tokens and click on Generate a new API token

Give it a name, select what it will have access to, and click on Generate

Once we have the API token generated, we can use it to generate an access token by running

curl -k -X POST "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize" -H "accept: application/json" -H "Content-Type: application/x-www-form-urlencoded" -d "refresh_token=token_generated_earlier"

Now we can use the value of “access_token” from the output. The vROps Swagger API can be found here
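
If jq is available, we can capture the access token straight into a shell variable. This is just a minimal sketch, assuming jq is installed and the API token is stored in a REFRESH_TOKEN variable:

# Request an access token and pull the "access_token" field out of the JSON response
ACCESS_TOKEN=$(curl -sk -X POST "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize" -H "Content-Type: application/x-www-form-urlencoded" -d "refresh_token=$REFRESH_TOKEN" | jq -r .access_token)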

Here is an example to get the currently logged-on user

curl -X GET "https://www.mgmt.cloud.vmware.com/vrops-cloud/suite-api/api/auth/currentuser?_no_links=true" -H "accept: application/json" -H "Authorization: CSPToken <access-token>"
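
As another example, the Suite API also exposes a resources endpoint. I'm assuming here that /suite-api/api/resources behaves the same through the cloud gateway as it does on-prem, so treat this as a sketch:

# List the first page of resources (10 per page), authenticating with the CSP token as above
curl -X GET "https://www.mgmt.cloud.vmware.com/vrops-cloud/suite-api/api/resources?page=0&pageSize=10&_no_links=true" -H "accept: application/json" -H "Authorization: CSPToken <access-token>"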

The full guide from VMware is available here

Upgrading vRSLCM (vRealize Lifecycle Manager) to 8.4.1.1

In this guide I will go over the steps to upgrade an existing 8.x vRSLCM appliance to the latest 8.4.1.1 release. The release notes can be found here

The first step is to log in to vRealize Suite Lifecycle Manager and go to the Lifecycle Operations section

Go to Settings -> System Upgrade

Click on Check for Upgrade

We can see that the check found a new version available: 8.4.1.1

Click on Upgrade

This will fire up the upgrade process and start upgrading packages. The system will automatically reboot into 8.4.1.1 once completed. We can check the version by going to Settings -> System Details

If you get the below error, clear the browser cache and try again

Changing passwords for the vRealize suite via vRSLCM (vRealize Suite Lifecycle Manager)

In this guide I will go over one of the Locker features in vRealize Lifecycle Manager, specifically password management.

As a reminder, vRSLCM can manage the following:

Admin Password Change: vRealize Automation, vRealize Operations Manager, vRealize Network Insight, vRealize Log Insight, VMware Identity Manager
Root Password Change: vRealize Automation, vRealize Operations Manager, vRealize Network Insight, vRealize Log Insight, VMware Identity Manager
Support Password Change: vRealize Network Insight
Console User Password Change: vRealize Network Insight
SSH User Password Change: VMware Identity Manager

The first step is to create a new password. We can do so by going to Locker from the welcome screen or the menu on the top right

Once in Locker, we can check to see where a specific password might be used. In my case I had just deployed most of the vRealize products, and I will look at the InstallerPassword reference

Once we click on InstallerPassword we can see some details about the password. Click on References to see where the password is used

Next we can go back a level by clicking on Passwords, then click Add to add a new password to the inventory

Complete the required fields and click Add

Now that we have the password created, we can go to the Lifecycle Operations service to update the password for the vRealize products. Click on the menu on the top right and select Lifecycle Operations

Go to Environments and click on View Details on the environment where we want to update the password. In my case I will update my vRealize environment

I'm going to update my vROps instance. In the menu under the product, select the node on the left side and click on Change Password towards the right

On the next screen we need to pick the current password for the environment and the new password that we're changing to

Lifecycle Manager went through and updated the password and its associations

Trying to log in using the root user confirms that the password has been changed

Upgrading vRA (vRealize Automation) to 8.4

In this post I will go over upgrading my 8.x vRA appliance to 8.4. As a prerequisite, we need to have vRSLCM (vRealize Lifecycle Manager) upgraded to 8.4. Instructions can be found here

To get started we can go to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> Binary Mapping. (If you haven't added your My VMware credentials you will need to do that first by going to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> My VMware)

Click on Add Binaries under Product Binaries

Select My VMware and click on Discover

We can see a list of binaries that have been discovered. We can select what we need and click on Add

This will create a request and start downloading the package. To view the progress we can click on the Click Here hyperlink

Click on the In Progress button to view the details

We now have to wait for the download to complete

While the download is running, check the hardware requirements for vRealize Automation 8.4.0 here. If the VMware Identity Manager hardware needs to be resized, refer to the documentation: Re-Size Hardware Resources for VMware Identity Manager

After the download is complete we can go to Environments -> View Details on the environment that includes vRA

Click on Upgrade

An inventory sync is recommended if the environment has changed since LCM performed the last sync. We can trigger the sync from the UI or click on Proceed to continue

Select product Version 8.4.0 and click Next. We can also review the compatibility matrix to make sure the environment is compatible.

Since we already checked the sizing while the download was running, we can check the box for I took care of the manual steps above and am ready to proceed, and click on Run Precheck

Run the Precheck to make sure there are no errors

Once the check is complete, click on Next. Review the upgrade details and click on Next. We are taken to the progress screen where we can follow the progress.

The system will get rebooted, and once it's back up we will be on 8.4

Disable vCLS (vSphere Cluster Services) in vSphere

While doing maintenance on my vSAN cluster recently, I needed to disable vCLS in order to fully shut down the cluster. Doing some research I found KB article 80472, which talks about temporarily disabling the service in order to perform maintenance. The steps are fairly easy

First we need to get the cluster ID from vSphere. To do so, all we need to do is select the cluster and look at the URL. For example

 https://<fqdn-of-vCenter-server>/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c8:eef257af-fa50-455a-af7a-6899324fabe6/summary

In the case above, all we care about is the number 8 in domain-c8.

Next we need to navigate to the vCenter server -> Configure -> Advanced Settings. For example:

Click on Edit Settings to the right:

Add a new key. Replace the <number> with the number found in the previous step

config.vcls.clusters.domain-c<number>.enabled with the value False

It would look like this:

Click on Add and click Save
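
For those who prefer the CLI, the same key can be pushed with govc. This is an assumption on my part rather than something from the KB; it presumes govc is installed and pointed at vCenter via the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD variables:

# Set the vCLS enablement key on vCenter; swap domain-c8 for your cluster's ID from the previous step
govc option.set config.vcls.clusters.domain-c8.enabled false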

vCLS monitoring will initiate a clean-up of the VMs and we should notice that all of the vCLS VMs are gone.

After the maintenance is complete, don't forget to set the same value to True in order to re-enable the HA and DRS services.

Upgrading vRSLCM (vRealize Lifecycle Manager) to 8.4

In this guide I will go over the steps to upgrade an existing 8.x vRSLCM appliance to the latest 8.4 release. The release notes can be found here

The first step is to log in to vRealize Suite Lifecycle Manager and go to the Lifecycle Operations section

Go to Settings -> System Upgrade

Click on Check for Upgrade

We can see that the check found a new version available: 8.4

Click on Upgrade

This will fire up the upgrade process and start upgrading packages. The system will automatically reboot into 8.4 once completed. We can check the version by going to Settings -> System Details

If you get the below error, clear the browser cache and try again

Shutting down a vSAN Cluster

I have the need to completely shut down some of my vSAN clusters for various reasons, and I've been having a hard time finding the proper procedure. As of 2/16/2021 VMware has released guidance here

Here are the steps I took on my end. If you have the vCLS service enabled, follow my other instructions here prior to starting the rest of this guide.

Disable cluster member updates from vCenter on each ESXi host in the cluster by running

esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
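
If SSH is enabled on the hosts, a small loop from a jump box saves typing this on every node. A rough sketch, assuming root SSH access and with made-up hostnames:

# Apply the setting to each ESXi host in the cluster; replace the hostnames with your own
for host in esx01.lab.local esx02.lab.local esx03.lab.local; do
  ssh root@$host "esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates"
done

Running the same loop with -s 0 reverts the setting at the end of the maintenance.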

After the above is completed, run the below on only one of the ESXi hosts. Take note of which host

python /usr/lib/vmware/vsan/bin/reboot_helper.py prepare

Place all ESXi hosts in maintenance mode with NoAction

esxcli system maintenanceMode set -e true -m noAction

Perform the necessary maintenance. Once the hosts are back up we will run the above in reverse

Remove the maintenance mode on all ESXi hosts by running

esxcli system maintenanceMode set -e false

Run the below command on the same host the prepare command was originally run on

python /usr/lib/vmware/vsan/bin/reboot_helper.py recover

vSAN availability can be checked by running

esxcli vsan cluster get

Re-enable cluster member updates from vCenter

esxcfg-advcfg -s 0 /VSAN/IgnoreClusterMemberListUpdates

Upgrading VCF 4.1.0.0 to 4.2.0.0 Step by Step

With the release of VCF 4.2 I wanted to get my lab upgraded. The release blog can be found here and the release notes are here

In order to get to 4.2.0.0 we first have to upgrade to 4.1.0.1. We can do so by going to Repository -> Bundle Management -> Download Now

The next step is to upgrade VCF by going to Inventory -> Workload Domains -> Select the workload domain -> Update/Patches -> Update Now for the VMware Cloud Foundation Update 4.1.0.1. The release notes can be found here

Next we are taken to the Upgrade page where we can follow the upgrade for each one of the components

Once the upgrade is complete we can click Finish to return to the main screen

Because we are changing the SDDC Manager version, I would strongly recommend clearing the browser cache and logging back in before going forward.

Next is the 4.2.0.0 update: Repository -> Bundle Management -> Download Now. In my case I already had it downloaded, so the next step is to apply the upgrade by going to Inventory -> Workload Domains -> Select the workload domain -> Update/Patches -> Update Now for the VMware Cloud Foundation Update 4.2.0.0. The release notes can be found here

Once the upgrade starts we can follow its progress

Once the upgrade is completed we can click Finish and go to the next step

Again, I would recommend clearing the cache since we changed SDDC Manager versions.

Once the upgrade is complete we are taken back to the previous page where we can see that the ESXi servers are next. The release notes can be found here. Click on Download Now

Once the download is complete, we can click on Update Now

If we have multiple clusters we can enable Cluster-level selection and select the specific cluster(s) we want to upgrade.

We can also enable sequential cluster upgrade as well as quick boot

We get to review the options once again before we click Finish to submit the task

Once submitted we can view the status by clicking on View Status

And with that we are finished with the workload domain. We can get back to the Update/Patches page

The next update is the configuration drift bundle. We can go to Inventory -> Workload Domains -> Select the workload domain -> Update/Patches -> Download Now. You will notice a new drop-down that allows us to pick the Cloud Foundation version.

Once the download is complete, click on Update Now

Once the upgrade started, I got redirected to the Update Status page.

Considering the update is only 219 MB, the upgrade went through pretty quickly. Once it's completed we can click Finish to get back to the main SDDC Manager page

The next step is to upgrade the NSX-T installation to NSX-T 3.1.0. The release notes can be found here. We can go to Inventory -> Workload Domains -> Select the workload domain -> Update/Patches -> Download Now.

Once the download is complete, click on Update Now

We can view the status and the steps by clicking on View Status.

Once the upgrade is complete we are redirected back to the available updates page showing that the vCenter server is next

Click on Download Now and wait for the download to complete. Once the download is complete, click on Update Now

We can view the task by clicking on View Status

Next are the additional domains we might have, where we can follow the same instructions as above. The process will be a lot quicker because the upgrades are already downloaded

If no additional upgrades are needed, we can clean up the downloads by following the instructions in my other post here

vRA cloud API getting started

I wanted to keep track of what needs to be done before we can actually query the API on vRA Cloud. I've had a hard time finding the documentation I needed in the past

The first step is to get an API token for the specific username. We can do this by going to My Account under User Settings in vRA Cloud

Go to API Tokens and click on Generate a new API token

Give it a name, select what it will have access to, and click on Generate

Once we have the API token generated, we can use it to generate an access token by running

curl --location --request POST 'https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize' --header 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'refresh_token=api token generated earlier'

Now we can use the value of “access_token” from the output. There's a number of Cloud Assembly examples here
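
To sanity-check the token, here is one hedged example that lists Cloud Assembly projects through the IaaS API. I'm assuming the standard api.mgmt.cloud.vmware.com endpoint; note that vRA Cloud expects the token in a Bearer header:

# List Cloud Assembly projects using the access token
curl -X GET 'https://api.mgmt.cloud.vmware.com/iaas/api/projects' --header 'Authorization: Bearer <access-token>'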

The full guide is available here

vIDM upgrade to 3.3.4 no networking detected

If you are like me and tried to perform an upgrade of vIDM 3.3.x to 3.3.4, you were most likely greeted by no network connectivity after the upgrade, with the following screen:

If you made a backup of the network configuration, this is where we would restore it.

If not, we can reconfigure it manually by running

/opt/vmware/share/vami/vami_config_net

Press 6 and go through the screens to configure the IP

Press 2 for default gateway

Press 4 for the DNS

Press 0 to show the current configuration

As you can see in my configuration, the DNS server didn't take. In order to fix it I recreated /etc/resolv.conf.

Remove the existing resolv.conf by running

rm -f /etc/resolv.conf

Create a symlink for resolv.conf

ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
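
To confirm the change took, we can check that the symlink points at systemd-resolved's file and that name resolution works (the hostname below is just a placeholder):

# Verify the symlink and test a lookup
cat /etc/resolv.conf
ping -c 1 vcenter.example.com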

In my case vIDM was still not responding, so I had to reboot the server. After the reboot everything started working properly